Micro-interactions in Scoot's Fleet App

One of the first big initiatives I pushed for at Scoot was a focus on our heaviest users: field technicians fixing scooters on the streets. We created a "Fleet App" for all of their tasks in the field. This case study explores a key feature that I led, with help from a talented junior designer and a user-centric product lead.

The tooling available to field technicians at the time was an outdated single-page web app. There was a long list of improvements to make, but the most valuable feature was the ability to search for and turn on vehicles in the field.

Step 1: Feature Prioritization

Even though there was obvious value to provide, we had to make sure the priority was high enough. I led our entire team through a prioritization exercise to collect other potential projects and confirm we were focusing on the highest-value work.

Next: Scope the Problem and Value

One of the main sets of actions within our mobile fleet app was locating and turning on vehicles, then performing various checks on them throughout a technician's shift. In collaboration with our product lead and designer, we started mapping out the "Vehicle Actions" to better understand how technicians were accomplishing these goals.

Validation of Value

Through our weekly workflow shadowing, we realized how painful it was to perform these repetitive tasks, all without any feedback from the interface. A typical scenario looks like the photo below, where Drew, an SF field technician, locks a kick scooter to a deployment location. This involves searching for a vehicle ID, confirming a 100% battery charge, sending an unlock command, refreshing the page for a confirmation, and so on. It is performed hundreds of times every week, so any improvement to this process would mean thousands of saved hours for operations.
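To make the repetition concrete, here is a rough sketch of what one deployment amounts to in code. The API client, types, and method names (FleetApi, searchVehicle, sendUnlock) are hypothetical stand-ins for illustration, not Scoot's actual implementation.

// Hypothetical fleet API client; names and shapes are illustrative only.
interface Vehicle {
  id: string;
  batteryPercent: number;
  locked: boolean;
}

interface FleetApi {
  searchVehicle(vehicleId: string): Promise<Vehicle>;
  sendUnlock(vehicleId: string): Promise<void>;
}

// Roughly the sequence a technician repeats by hand today: search,
// confirm charge, send the unlock command, then refresh until confirmed.
async function deployVehicle(api: FleetApi, vehicleId: string): Promise<void> {
  const vehicle = await api.searchVehicle(vehicleId);
  if (vehicle.batteryPercent < 100) {
    throw new Error(`Vehicle ${vehicleId} is not fully charged`);
  }

  await api.sendUnlock(vehicleId);

  // The old tool gave no feedback, so technicians refreshed the page manually;
  // bundled into one action, that manual refresh becomes a short polling loop.
  let unlocked = false;
  while (!unlocked) {
    await new Promise((resolve) => setTimeout(resolve, 2000));
    const refreshed = await api.searchVehicle(vehicleId);
    unlocked = !refreshed.locked;
  }
}

Collapsing those manual steps into a single, confirmed action is where the time savings come from.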

Next: Benchmark Comparable Workflows

We wanted to collect different approaches to handling complex actions on a mobile device, so we looked at a variety of apps. I find this to be a great way to get inspiration before sketching out ideas.

Learning Through Iteration

Our first iterations tried to solve too many scenarios at once. We quickly learned that we needed a simpler system for launching the vehicle actions from multiple entry points, like searching for a vehicle or reviewing a vehicle's open maintenance issues. A rough sketch of that idea follows.
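This is a minimal sketch of a shared action launcher reachable from more than one screen; the action ids, labels, and helper names are assumptions made for this example, not the Fleet App's real code.

// Hypothetical shared vehicle-action launcher; ids and labels are assumptions.
type VehicleAction = {
  id: string;
  label: string;
  run: (vehicleId: string) => Promise<void>;
};

const vehicleActions: VehicleAction[] = [
  { id: "turn-on", label: "Turn on", run: async (vehicleId) => { /* send unlock command */ } },
  { id: "reserve", label: "Reserve", run: async (vehicleId) => { /* mark reserved for this tech */ } },
  { id: "inspect", label: "Start inspection", run: async (vehicleId) => { /* open checklist */ } },
];

// One launcher, reused by every entry point, so the action UI stays consistent.
function openVehicleActions(vehicleId: string): void {
  console.log(`Showing ${vehicleActions.length} actions for vehicle ${vehicleId}`);
}

// Entry point 1: tapping a result in vehicle search.
function onSearchResultTapped(vehicleId: string): void {
  openVehicleActions(vehicleId);
}

// Entry point 2: reviewing a vehicle's open maintenance issues.
function onMaintenanceIssueOpened(vehicleId: string): void {
  openVehicleActions(vehicleId);
}

Keeping one launcher behind several entry points is what let us keep the prototypes simple while still covering the different ways technicians reach a vehicle.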

Prototype 1

We were curious how a stacked button list, launched from the top right of each vehicle page, would perform. Our biggest assumptions were around the discoverability of the vehicle actions within that list.

P1 "works" but doesn't scale

General feedback on Prototype 1 was that it worked, but the stacked button approach doesn't scale well: the button labels become harder to read as more actions are added. So we wanted to push it a little further. Prototype 2 took the "floating action button" approach that Google has popularized. We created this prototype and tested it with some of our users.

P2 100% Invalidated

Despite being an elegant solution, the floating action button failed completely: zero out of five users were able to find the actions behind it. We collected our learnings on a Trello board and discussed them with the entire team during our weekly design review. It was overwhelmingly clear that the floating action button was not appropriate and that we needed to continue iterating.

Many Iterations Later, Validation

Multiple iterations later, we tested a prototype built around an assumption we were keen to validate: a grid of action buttons that would pop up straight from the map, giving quick and easy access to vehicle actions. We tested it with five technicians, then collected and synthesized our learnings.

We were also iterating on our validation process, so this time we collected our learnings in a rainbow-spreadsheet style instead of Trello. The thinking was that, since our discussions were now less qualitative, a more checklist-friendly format would be easier to work with.

Validated!

The grid system with easy access to actions was a huge success: five out of five field technicians could perform all major tasks. The concept of revealing "Reserving" and "On for [field-tech-name]" as banner indicators was also a huge improvement over the current set of tools; with it, field techs would be able to see who is working on which vehicle without having to communicate directly. Below is the evolution from left to right.

Scoot Fleet App
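For a sense of how those banner indicators could be derived, here is a minimal sketch; the status fields and labels are assumptions for illustration, not the app's real data model.

// Hypothetical vehicle status; field names are assumptions for this sketch.
interface VehicleStatus {
  reservedBy?: string;   // field tech currently reserving the vehicle
  poweredOnBy?: string;  // field tech who turned the vehicle on
}

// Map status to the banner text shown on the vehicle, e.g. "On for Drew".
function bannerLabel(status: VehicleStatus): string | null {
  if (status.poweredOnBy) return `On for ${status.poweredOnBy}`;
  if (status.reservedBy) return "Reserving";
  return null; // no banner: nobody is working on this vehicle
}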

Implement and then Gauge Success

After building and shipping the feature in Scoot's three cities, we started tracking the performance metrics we had set via our mobile analytics platform, Amplitude. We wanted to see a 50% increase in user adoption of the Fleet App (away from the existing tool). We also wanted to track the number of rides longer than 12 hours, or "forgotten and held," each month. Our hypothesis was that a better visual system for seeing which vehicles are "reserved" for work would reduce that number and increase the number of vehicles available for public use.
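As a rough sketch of that instrumentation (the event name and properties below are hypothetical, not our actual Amplitude taxonomy), each vehicle action in the new app fires an event that can be compared against legacy-tool usage:

// Minimal Amplitude instrumentation sketch using the Browser SDK;
// the event name and properties are hypothetical, not our real taxonomy.
import * as amplitude from "@amplitude/analytics-browser";

amplitude.init("YOUR_AMPLITUDE_API_KEY");

// Fired whenever a technician performs a vehicle action in the new Fleet App,
// so adoption can be charted against usage of the legacy web tool.
function trackVehicleAction(actionId: string, vehicleId: string, city: string): void {
  amplitude.track("Vehicle Action Performed", {
    actionId,
    vehicleId,
    city,
    tool: "fleet-app", // events from the legacy tool would report a different value
  });
}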

Shipping a Successful Feature

We learned a lot about how our users interact with vehicles in the field, while building and sharing empathy within our team. Engineers were motivated by the iterative process and eager to build the feature. Even our field technicians were asking when they could use the feature they had seen during prototype testing. In the end, we not only shipped a valuable feature but were also able to show our stakeholders how adoption rates for the new tool were increasing.
