Clipboard Health operates in more than 600 different marketplaces, each with its own needs and each with a different set of optimizations that, if implemented, would maximize outcomes for us, the nurses we serve, and the healthcare facilities that trust us to provide those nurses. We are firm believers that “pretty good” is not good enough. To exceed that standard in every market, we are building a system that allows us to implement market-specific policies tailoring interventions to the needs of individual markets. This involves putting together an “If X, do Y” protocol for a variety of situations that automatically identifies market conditions we can improve and implements a known intervention. From there, we gather data on the results and further tune the optimizations to be even more successful.
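To give a flavor of what an “If X, do Y” protocol can look like, here is a minimal sketch in Python. Everything in it (the MarketSnapshot fields, the fill-rate threshold, the boost_pay_rate intervention) is a hypothetical illustration for external readers, not our production logic.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical snapshot of a single market's current conditions.
@dataclass
class MarketSnapshot:
    market_id: str
    fill_rate: float   # fraction of posted shifts that get worked
    open_shifts: int   # shifts currently unfilled

# A tuning rule: "If X (condition), do Y (intervention)."
@dataclass
class TuningRule:
    name: str
    condition: Callable[[MarketSnapshot], bool]
    intervention: Callable[[MarketSnapshot], None]

def boost_pay_rate(market: MarketSnapshot) -> None:
    # Placeholder intervention; in practice this would kick off an
    # experiment whose results feed back into the rule's thresholds.
    print(f"{market.market_id}: boosting pay rate to attract more nurses")

RULES = [
    TuningRule(
        name="low-fill-rate",
        condition=lambda m: m.fill_rate < 0.80 and m.open_shifts > 50,
        intervention=boost_pay_rate,
    ),
]

def tune(markets: list[MarketSnapshot]) -> None:
    """Apply every matching rule to every market."""
    for market in markets:
        for rule in RULES:
            if rule.condition(market):
                rule.intervention(market)

tune([MarketSnapshot("market-042", fill_rate=0.72, open_shifts=130)])
```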

The document below is real - it represents a week’s worth of our work toward what we call Marketplace Tuning. We use these documents to keep everyone up to date on the experiments we are running and the results we are seeing so all our teams can work together to refine and iterate on our efforts.

We think this will be of particular interest to those from a Finance background, as it gives a peek into the inner workings of our approach, which goes a step beyond the pricing and market-optimization methods of other multi-sided marketplaces. Enjoy!

Marketplace Tuning 9/14/2022

The agenda

Read through the experiment log, this document, and the attached supplementary documents.

***Note for external readers: One of the failure modes of experiments is that they tend to fade into the background when things are busy (and things are nearly always busy). Another failure mode - or at least something that works sub-optimally - is an experiment that yields an important result but is only understood by the person running it.

Our experiment log solves both problems: it tracks every running experiment, the documents needed to understand it, its start date, expected end date, and results. That’s tied to an expectation that our managers stay familiar with the log at all times. This means that any given experimenter has accountability (since people will ask them about progress) and that nearly everyone in the company is aware of what they have learned or are learning at any given time.***

Discussion

We’ll follow the outline below for the doc. I have added three headings corresponding to the main sections. If your work falls under one of the headings below, please answer the relevant questions under it.

  1. Portfolios

*(Note for external readers: Portfolios are separately maintained documents that assess, track, and set strategy for every aspect of a high-level problem. These problems are usually stated in terms the customer might use to describe them, like “CBH isn’t meeting my staffing needs” or “CBH isn’t reliable enough”.

Within each portfolio are summaries of how we are addressing those problems, any improvements we’ve made, and prioritization of each piece of the puzzle. From a portfolio, it’s easy to filter down through links to each experiment and initiative related to the general problem the portfolio addresses and see specifics related to that particular portion of our efforts.)*

    1. In tuning, portfolio owners will provide an update on their portfolio thinking based on the following questions.
      1. Has your thinking about your portfolio as a whole evolved since our last tuning? Are you using any new frameworks to prioritize the problems at hand? Can you describe them?
      2. What has changed in your portfolio since our last tuning? Did you add or remove any items and why? Has your thinking about any of the items changed?
  2. Toolkits. Toolkit owners will update on the experiments they’re running and any interventions they’re taking systemwide.
    1. They’ll answer the following questions.
      1. What experiments are you running? What have you learned so far? What are you planning to do differently? If they’ve been successful, are we deploying them systemwide? By when?
      2. Are you planning on launching any new experiments that you want to share? Can you link their proposals to the doc?
      3. Are you deploying any new strategies systemwide? What’s the progress? What will the impact be?
  3. Market-level Tuners. Each tuner will write an update on the status of their markets. They’ll discuss the experiments they’ve been running and their learnings as well as determine which efforts are ready for deployment across all markets facing the problem.
    1. They’ll answer the following questions.
      1. Has your thinking about the markets you’re focused on changed?
      2. What experiments are you running? What have you learned so far? What are you planning to do differently? If they’ve been successful, are we deploying them systemwide? By when?
      3. Are you planning on launching any new experiments that you want to share? Can you link their proposals to the doc?
      4. Are there any successful experiments that have yet to be deployed systemwide? What’s blocking their deployment?
  4. Discussion. We normally write discussion items during the meeting, but I encourage everyone attending to write larger notes in this section before the meeting. We’ll have the usual discussion as well.

Portfolios

We don’t have updates to portfolios this week, but Nancy Yang will have a supply acquisition portfolio by our next marketplace tuning session.

Toolkits

Attendance Score - Jaron

Note for external readers: If a healthcare facility believes a nurse they contacted through our app is covering a shift, they stop looking. When the nurse then fails to work the shift, the facility is left with a gap in its ability to provide the healthcare its patients need. This is a big deal, and it’s something we take very seriously.