GraphQL Tips: The Backend

Welcome back to the saga of Omada Engineering's adventures with GraphQL. In previous posts, I discussed our processes for learning and adopting GraphQL and the benefits it brought us. This time, I'll share two technical tips we discovered while adding GraphQL support to our Ruby on Rails API server using the graphql-ruby gem. First, I’ll cover the code organization structure we chose and the benefits it gives us in terms of discoverability and maintainability. Second, I’ll demonstrate the solution we implemented for adding GraphQL-specific details to our server request logs.

Code Organization

By default, graphql-ruby generates the directory app/graphql/types for you to place the Ruby classes that define the types for your GraphQL schema. It also gives you the app/graphql/mutations directory for the Ruby classes that define your GraphQL mutations. We realized that adding all types and mutations to these directories without any further organization would quickly become a maintenance burden. So we chose to organize type and mutation classes into Ruby modules (and the corresponding directories on the filesystem) based on the category of data the types apply to.

For types, this often means grouping together classes for the following:

  • A GraphQL type for a core part of our business domain (e.g. a community Discussion)
  • A corresponding GraphQL input type
  • A collection type for representing paginated lists
  • Any associated GraphQL enums

For example, the classes associated with our Discussion type are grouped in one directory like so:

  • app/graphql/types/
    • discussion/
      • discussion_input_type.rb
      • discussion_topic_type.rb
      • discussion_topics_status_enum.rb
      • discussion_type.rb
      • discussions_collection_type.rb
      • discussions_sort_key_enum.rb

When defining one of these classes, we place it in a Ruby module to match the directory. For example, DiscussionType:

module Types
  module Discussion
    class DiscussionType < Types::Base::BaseObject
      # ...
    end
  end
end
For mutations, we group actions related to a core data type in one directory/Ruby module. For example, we have the following mutation classes for our BloodPressureReading type:

  • app/graphql/mutations/
    • blood_pressure/
      • blood_pressure_reading_toggle_filter.rb
      • blood_pressure_reading_update.rb

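As with types, each mutation class lives in a Ruby module matching its directory. Here's a minimal sketch of how BloodPressureReadingUpdate might be declared (the BaseMutation stand-in below replaces the base class graphql-ruby generates, so the snippet stands on its own; the real class would define arguments, fields, and a resolve method):

```ruby
module Mutations
  # Stand-in for the BaseMutation class graphql-ruby generates;
  # in the real app this subclasses GraphQL::Schema::Mutation.
  class BaseMutation; end

  module BloodPressure
    class BloodPressureReadingUpdate < BaseMutation
      # argument and field definitions plus a #resolve method go here
    end
  end
end
```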
Using these patterns, we keep the code for our GraphQL mutations and types organized and related classes close together.

Breaking Up the Query Type Implementation

In GraphQL, the root of your schema's types is the Query type, which provides fields for returning data for all of your types. The graphql-ruby gem generates a GraphQL type class at app/graphql/types/query_type.rb for defining your Query type. While it makes sense for your GraphQL schema to have a root type, defining all the fields on the Query type in one Ruby class would definitely lead to maintenance difficulties, as you would need to define methods for loading all your data types in one place. To keep us from ending up with a giant Query class, we took advantage of a graphql-ruby feature called Resolvers.

A resolver class defines the signature and data lookup code for one field of a GraphQL type, including the field's return type and those of any arguments it takes. For each field on our Query type, we've created a resolver class to implement it.

For example, our resolver for our Query's accountProfile(accountId: Int!) field looks similar to the following:

module Resolvers
  class AccountProfileResolver < GraphQL::Schema::Resolver
    type Types::AccountProfile::AccountProfileType

    argument :account_id, Integer, required: true

    def resolve(account_id:)
      # Load account...
      # Check if current user has permission to access account data
      # Return account
    end
  end
end

And so our QueryType class becomes simply a list of fields:

module Types
  class QueryType < Types::Base::BaseObject
    field :account_profile, resolver: Resolvers::AccountProfileResolver
    # Other fields...
  end
end

While the graphql-ruby docs recommend other approaches to code organization besides resolvers, we felt resolvers solved the problem best: they allow the field signature definition and data lookup code to live in the same place, while keeping the QueryType class as minimal as possible.

Logging Query Details

To get better aggregate data on the queries and mutations our GraphQL API receives, we needed a strategy for getting more than the raw query string into the structured JSON we record in our general server request logs. We wanted a JSON representation of the query or mutation received so we could aggregate log entries to, for example, count all requests that queried a certain field in our GraphQL schema. The challenge was how to get that data out of graphql-ruby so we could record it with the rest of the request data.

We found that we could use graphql-ruby's Ahead-of-Time AST Analysis API. The library parses the incoming GraphQL query into an abstract syntax tree, and this API will run your code for each piece of a query, such as every field it accesses. To use this API, you create a class that extends GraphQL::Analysis::AST::Analyzer and implements one of the callback methods that class supports. In our case, we implemented on_leave_field so that we could record the name of each field an incoming query or mutation accessed. You also implement a result method to return the result of your analysis, which we'll see how to access later.

Our class to collect a representation of each field accessed is as follows:

class LoggingGraphqlQueryAnalyzer < GraphQL::Analysis::AST::Analyzer
  def initialize(query_or_multiplex)
    super
    @nodes_map = {}
    @root_entries = []
    @operation_node = nil
  end

  def on_leave_field(node, parent_node, _visitor)
    @nodes_map[node] ||= { name: node.name }

    case parent_node
    when GraphQL::Language::Nodes::Field
      @nodes_map[parent_node] ||= { name: parent_node.name }
      parent_entry = @nodes_map[parent_node]
      parent_entry[:fields] ||= []
      parent_entry[:fields] << @nodes_map[node]
    when GraphQL::Language::Nodes::OperationDefinition
      @root_entries << @nodes_map[node]
      @operation_node = parent_node
    end
  end

  def result
    {
      type: @operation_node&.operation_type,
      operation_name: @operation_node&.name,
      query: { fields: @root_entries }
    }
  end
end

Let's consider how it would operate on the following query:

query DiscussionQuery {
  discussion(discussionId: 1) {
    title
    message
    author {
      firstName
    }
  }
}

The on_leave_field method receives a node object and a parent_node object. A node represents a piece of a GraphQL query, such as a field or a field argument. It has a name method which returns a string name, such as the name of a field. In our implementation of on_leave_field, we create a hash to store the name of the field the current node represents, and then store that hash in a larger hash called @nodes_map using the node object as the key. This allows us to look up the field-specific hash the next time we encounter that node.

Then we check if the current node's parent node is a field node or an operation node. An operation node represents the root of the query. In our example, its name is DiscussionQuery. Fields whose parent is the operation node are the top level fields our query is accessing, such as discussion in this example.

If the current node's parent is another field (its class is GraphQL::Language::Nodes::Field), we know the current node is not a top-level field. So we add the hash representing the current node to a fields array on the hash representing the parent node, initializing the parent's hash and adding it to @nodes_map if it doesn't already exist. Because we reuse the parent's existing hash when it does exist, a parent field containing multiple sub-fields ends up with all of them in its fields array.

If the current node's parent is the operation node (its class is GraphQL::Language::Nodes::OperationDefinition), we know the current node is a top-level field. So we add its hash to an array called @root_entries, which in our current example would only contain one element with the name discussion. We also store the parent node in the @operation_node instance variable so that we can access information about it in our implementation of result.
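This bookkeeping works because Ruby hashes are shared by reference: the entry we append to a parent's fields array is the same object we mutate when handling other nodes. Here's a standalone simulation of the discussion/title portion of the example (plain Ruby, no graphql-ruby required; the symbol keys stand in for the real node objects):

```ruby
nodes_map = {}
root_entries = []

# on_leave_field fires depth-first, so we leave "title" first;
# its parent is the "discussion" field node.
nodes_map[:title] ||= { name: "title" }
nodes_map[:discussion] ||= { name: "discussion" }
nodes_map[:discussion][:fields] ||= []
nodes_map[:discussion][:fields] << nodes_map[:title]

# Later we leave "discussion" itself; its parent is the operation node,
# so its entry goes into the root list.
root_entries << nodes_map[:discussion]

# The title entry appended earlier is already nested under the root entry,
# because both references point at the same hash.
p root_entries
```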

In our result method, we put together a final hash containing the data we want added to our request log. It includes the operation type (query or mutation), operation name (DiscussionQuery in our example), and the tree of hashes representing the fields we captured in on_leave_field. This final hash can then be serialized to JSON as part of the larger JSON structure we log for the request.

For this example, the JSON output would be the following:

{
  "type": "query",
  "operation_name": "DiscussionQuery",
  "query": {
    "fields": [
      {
        "name": "discussion",
        "fields": [
          { "name": "title" },
          { "name": "message" },
          {
            "name": "author",
            "fields": [
              { "name": "firstName" }
            ]
          }
        ]
      }
    ]
  }
}
To run our analyzer class on each incoming request, we configure it in our schema class:

class GraphqlSchema < GraphQL::Schema
  use GraphQL::Analysis::AST
  query_analyzer LoggingGraphqlQueryAnalyzer

  # ...
end

Finally, we need to access the results of the analyzer in our Rails controller that handles incoming GraphQL requests. To do this, we leverage graphql-ruby's Tracing API, which allows you to provide an object that can listen for different events the library fires as it's processing a GraphQL query. It will call your object's trace method with a key for the event and associated data, plus a block that executes the next segment of query processing.

We created a simple class that implements trace and, for the analyze_query event, saves the result from our query analyzer class so that we can access it from our controller:

class GraphqlRequestTracer
  attr_accessor :log_data

  def trace(key, _data)
    result = yield
    if key == 'analyze_query'
      self.log_data = result.first
    end
    result
  end
end

In our GraphqlController, we pass an instance of the tracer along with the query context and then append the data from the analyzer to our log data in append_info_to_payload, which our logging system calls to get log data for the current request. It looks something like this:

class GraphqlController < ApplicationController
  def execute
    # ...

    context = {
      # ...
      tracers: [request_tracer],
    }

    @result = GraphqlSchema.execute(query, variables: variables, context: context, operation_name: operation_name)

    render json: @result

    # ...
  end

  def append_info_to_payload(payload)
    super

    return payload unless @result

    payload[:graphql] = {
      status: @result.has_key?('errors') ? 'error' : 'success',
      errors: @result['errors'],
    }.merge(request_tracer.log_data || {})

    payload
  end

  # ...

  def request_tracer
    @request_tracer ||= GraphqlRequestTracer.new
  end

  # ...
end

This adds the GraphQL-specific data to our log entry JSON under the graphql key. It was tricky to figure out that the way to get the analyzer data to the controller was through the tracing API, but now that it's in place, we can extend it for other query analysis concerns in the future.


So far we’re happy with the approaches we chose for organizing our GraphQL class file structure and separating the implementation of our QueryType into encapsulated classes. On the logging side, our addition of GraphQL-specific details in JSON format makes it easier for us to get aggregate data from our logging system on how often the fields and types in our GraphQL schema are accessed. If you’re using GraphQL with Rails and graphql-ruby like we are, I hope these tips give you a leg up in your GraphQL journey.

GraphQL: Reaping the Benefits

In my last post on our adoption of GraphQL, I discussed the process our engineers took in evaluating and trialing GraphQL as a technology for improving how we develop APIs. Throughout that process, we confirmed that GraphQL solves or improves upon several of the shortcomings we identified with the way we traditionally developed APIs. In this post, I’ll enumerate those shortcomings and how starting to build several new APIs with GraphQL has led to numerous benefits.

Documenting our API

We’ve always faced the challenge of documenting the structures and data types of our API’s responses and expected request formats. Lack of good documentation places a challenge on our mobile developers, who sometimes would end up reading through our backend Ruby code trying to figure out just what type a particular data point would have in a response. We tried a few different solutions, such as handwriting API documentation in Google docs or code comments and describing our structure/types using a YAML-based format.

These approaches worked for the most part, but suffered from issues like:

  • Requiring manual engineering effort to stay up-to-date
  • Supporting response types/structures out-of-the-box, but not requests

Since writing an API with GraphQL requires that you define a statically typed schema, there are several great tools available to automatically generate documentation from your schema. We went with GraphiQL since it comes out-of-the-box with the graphql-ruby gem and easily mounts into a Rails app. So far, both web and mobile engineers have appreciated having complete and always up-to-date API documentation at their fingertips.

Sending the Right Data Points in Fewer Requests

Before GraphQL, we often struggled to balance the number of our API endpoints and the size of their responses. Initially, we had dedicated endpoints for each type of resource (such as meals tracked or messages received). But this often required our client apps to make multiple requests to get the data they needed, which could easily turn into a performance problem, particularly for our mobile apps when used on cellular data networks.

The next evolution was API endpoints tailored to particular pages or screens in the app. This allowed all the data for a particular section of an app to be retrieved in one request. But inevitably discrepancies would arise between platforms, and one platform might end up with data it didn’t need. For example, we sometimes decide to initially launch new features in only one of our mobile apps or only in our web app, so that we can learn about the use and value-add of the feature before investing in it more heavily. But with page/screen-based API endpoints, it becomes harder to exclude data from the responses to apps that don’t use it, and to avoid the associated backend data loading and processing.

GraphQL helps us with both the number of requests and the data points loaded. A client app can load several different types of resources in one GraphQL query over one HTTP request. Additionally, since GraphQL only responds with the data points, or "fields," that the client app requested in a query, we can easily avoid loading data for an app that doesn't need it.

Increasing Ease-of-Use for our Mobile Engineers

With our REST-based endpoints, our iOS and Android engineers have traditionally needed to write strongly-typed classes in Swift or Java/Kotlin to represent the data sent back from our API before they could start consuming data from a new endpoint. With Apollo’s native mobile libraries, they can instead have these classes automatically generated from our API’s schema and the GraphQL queries and mutations they’ve written. This means less time before they can start loading data, and less human-generated code for them to maintain.

Catching Integration-level Bugs

Ensuring web and mobile client apps are making valid requests to our API has traditionally been time consuming to set up. Across both web and mobile, we’ve relied on often expensive end-to-end tests to run our apps against a running server. While these tests provide a high level of confidence that the integrations between our apps and server are sound, they often aren’t written until late in the feature development process, and existing ones aren’t usually run until an engineer pushes their work to our CI system. This increases the time from a bug being introduced in either the API or a client app to when it’s caught.

GraphQL improves on this situation due to its strongly-typed nature. Because tooling can statically analyze queries and mutations to validate them against our API schema, we can pick up on bugs more quickly than with end-to-end tests. On mobile, the automatic class generation process mentioned before will fail at compile time if they write an invalid query or mutation, or if we accidentally introduce a breaking change on the backend. We’ve also set up eslint-plugin-graphql to do the checks for our web frontend.

Centralizing the Definition of a Resource Type

In the past when we built page-based API endpoints, we sometimes found that the same data needed to be available in multiple endpoints for different pages (e.g. on our home page and on our Progress page), and in slightly different formats. This resulted in duplicate sources-of-truth for how a particular type of resource could be formatted in API responses, and more time writing similar code and tests.

In GraphQL, you’re encouraged to define a single type for a given resource (e.g. “meal” or “blood pressure reading”). Various fields on other types can return instances or lists of instances of that resource, but that resource is still implemented in the API by one type. That type can contain all the possible data a client might want to have access to, since GraphQL will only return the data points that the client explicitly queries for. Being defined in one place rather than several also makes it less likely that bugs will be introduced in the representation of a data type.

GraphQL provides a nice balance between the organization of RESTful APIs and the convenience of page-based APIs by allowing clients to mix-and-match the types and amount of data pulled in a given query or mutation result, according to the client’s needs.

Simplifying Data Retrieval and Storage on Web

For our participant and health coach-facing web apps, we’ve been using React and Redux for several years now. While they’ve given us a big productivity and maintainability boost, we found that they sometimes fall short when it comes to retrieving and managing data from our API. React is not a data loading library, so it gets a free pass. Redux, out of the box at least, is not a data loading library either, but rather a predictable state manager that does a great job at managing app and UI state that various disparate components need to access.

You can leverage other libraries to add data loading capabilities to Redux. But you're often left writing a lot of repetitive code around making API requests and managing loading, success, and error states. You also have to figure out the best way to store your data and manage the relationships between different pieces of data, and coordinate as a team to keep everyone's code consistent. There are many libraries out there to help with these concerns, but no single one has risen above the others as the best solution, so you're left fighting decision fatigue.

We found that Apollo Client and its React integration for web bring some advantages over how we were doing data loading and management with Redux:

  • Request Lifecycle: Not only does Apollo make the API request for you, it automatically re-renders your React component when the request starts loading and upon completion, whether in success or error.
  • Data Normalization: With GraphQL, data comes back from the API in a nested format, with an object of one type of data containing one or more instances of other types of data, and so on. Apollo automatically normalizes this data into a store in which each object is stored under a key based on the object’s type and ID. This makes it easy to look up and update a particular object without having to traverse a nested data structure.
  • Standardization: Apollo’s API is more opinionated than Redux and its associated libraries. The patterns it provides make it easier for different developers to write more consistent code.
  • Reduced state management code: While Apollo still requires some work on your part to update the client-side store in some cases (add and delete), it can be simpler than what you might do in a Redux state tree since you can pull specific objects out of its normalized store by ID or by query.


Overall, using GraphQL has been a boon to development, both in building and consuming APIs. These benefits make the learning curve and cost of adoption worthwhile.

In a future post, I’ll delve into the technical details around the tips, tricks, and watchpoints we’ve discovered so far when working with GraphQL. Stay tuned!

Thank you to the following fellow Omadans for their help with GraphQL adoption and/or this blog post: Franck Verrot, Chris Constantine, Ethan Ensler, Alexander Garbowitz, Scott Stebleton, Austin Putman, Jonathan Wrobel

Exploring GraphQL at Omada

We build a lot of APIs at Omada. We build them to power the web, iOS, and Android apps our participants use to interact with their health coaches, take their lessons, track their data, and more. We build them to power the internal web app our health coaches use to understand what motivates our participants and help them make better decisions, as well as the internal UIs our support teams and operations managers use.

Traditionally, we’ve built APIs using a mostly RESTful architecture and the conventions native to the Ruby on Rails framework that powers our backends. More recently, engineers from a couple of teams started exploring alternatives. After enumerating and considering the shortcomings of our current way of doing things, we decided to explore whether adopting GraphQL would help solve some of our challenges. We experimented with and evaluated GraphQL through incremental adoption, starting with a minor internal-facing feature and building up to having it power the next major feature for our participants.

In this post, I’ll cover:

  1. The process we used to get started with GraphQL.
  2. The strategies we used to overcome GraphQL’s learning curve.

In an upcoming post, I’ll cover the ways in which we discovered GraphQL solved some of our problems, including API documentation, data loading efficiency, mobile workflows, request correctness, and frontend state management.

Adoption Process

Omada Engineering has long held a bi-weekly meeting where engineers from across all teams and levels can come together to discuss cross-engineering technical initiatives, shared challenges, and new ideas about technology and processes. For some topics, a separate group of engineers is assembled to dig deeper into a particular area. After discussing how we could improve the way we build APIs, we decided to form one such working group to analyze our current practices and investigate potential improvements.

When this group first met, we identified several shortcomings with our current API architecture and technology choices, including inconsistent documentation for client-side engineers, data loading inefficiencies, and complex frontend data management. We considered several solutions, including GraphQL and JSON API. We decided to go forward with GraphQL because of its robust tooling, growing community, and built-in type safety.

Internal First

The first feature we used GraphQL for was a view in our customer support-facing internal app that displayed a paginated list of the blood pressure readings a participant has taken, which we were building out to support our program’s expansion into hypertension. We chose this as a relatively straightforward feature for trying out a new technology (as opposed to the more complex UI health coaches get for reviewing blood pressure readings).

After releasing the customer support UI and confirming that it was bug-free, we continued over the next few months to roll out several more internal features using GraphQL, mostly new additions to our health coaching tools. During this process, the Coaching Engineering Team experimented with a few different ways of structuring GraphQL-related code on the backend and frontend, and with how best to leverage the libraries we’d chosen to get us going with GraphQL, graphql-ruby and Apollo Client.

We hope to share some of these learnings in an upcoming blog post on tips, tricks, and watchpoints we discovered while adopting and learning GraphQL.

Mobile, the Next Frontier

Those of us on the backend and frontend web side who were advocating for GraphQL knew that if it didn’t work for our mobile teams, it wouldn’t work for our company. In order to find out how well GraphQL suited their needs, a couple engineers paired with or consulted with our lead iOS and Android engineers to help them prototype a simple feature as a test case for GraphQL. They investigated automatic code generation, security implications, and testability of Apollo’s GraphQL libraries for iOS and Android. Though the prototypes were not meant to go to production, they gave us enough confidence that we’d identified the most important implications of GraphQL adoption on mobile.

Making a Bet on GraphQL

After all the initial work to get to know GraphQL and try it out on our various platforms, we were left wondering what the next step was. So far, we had only shipped GraphQL usages to production that were part of our internal and coaching applications. In order to truly find out if GraphQL would work for us long-term, we needed to try it out on a participant-facing feature built in both the web and mobile apps. We needed to make sure we were really ready for our GraphQL API to take on the additional traffic and feature complexity, as well as to provide a level of stability necessary for serving our participants as a provider of digital care. We also wanted to be prepared to deprecate our GraphQL API without breaking mobile app versions still in use in the wild if we decided that GraphQL wasn’t for us.

The chance came along when our product team and one of our visionary engineers got together to propose a bold new feature, a new participant community experience based around shared interests or challenges. This feature was going to be sufficiently complex for both the backend/web and mobile developers that we believed it would serve as a true test of GraphQL. This also meant we needed to finish the infrastructure work needed to fully productionize our GraphQL API, including:

  • Limiting acceptable query complexity to avoid denial-of-service type attacks.
  • Restricting schema introspection to authorized users.
  • Integrating GraphQL API request-specific log data into our backend logging aggregation setup.
  • Capturing performance data from our GraphQL API.

I’m happy to say we’re close to the initial launch of this new feature using GraphQL, and we remain satisfied with it as a tool for improving the way we build APIs.

Scaling the Learning Curve

As awesome as GraphQL is, it and its associated libraries and tooling definitely come with a learning curve, especially for engineers used to a RESTful architecture and the default Rails conventions.

We leveraged the following strategies to get our engineers up-to-speed:

  1. Tech Talk: Omada engineers frequently prepare and deliver tech talks to their fellow engineers, usually on Tuesdays during lunch. Earlier this year, I gave a talk to introduce GraphQL to the rest of Omada Engineering.
  2. Pair Programming: Engineers at Omada frequently utilize pair programming to help us come up with better solutions, share knowledge, and decrease the need for separate code review. It provides a natural setting for one engineer to teach another about a new technology, and it helped us to spread GraphQL knowledge amongst the teams using it.
  3. Learning Time: Engineers at Omada are encouraged to take a couple of hours each week to invest in their professional development, whether through doing tutorials, watching conference talks, or something else. For those of us who wanted to delve deep into GraphQL, this dedicated time provided a great opportunity to level up. Additionally, the Coaching Engineering team lead and manager arranged for the whole team to spend a day working through a GraphQL tutorial together.


While Omada still has a ways to go in our journey with GraphQL, we’ve made a solid start and are already seeing the benefits. I’m excited to see our first GraphQL-powered, participant-facing feature go live soon, and to see how our use of GraphQL evolves over the coming months.

Stay tuned for the next post on our GraphQL journey, in which I’ll detail how GraphQL helped us solve several problems with API development.


Engineering Values, part 2: Mindful Collaboration

Previously on the blog…

This is the second in a series of blog posts exploring the values held by our Engineering department, and how we put them into practice. As a refresher, our Engineering department’s values, in no particular order, are:

  • Sustainability
  • Mindful Collaboration
  • Data-Driven Approach
  • Shipping Software
  • Diversity & Inclusion
  • Learning & Innovation

In the previous post, I went into detail about the value of sustainability. Today I talk about mindful collaboration.

Value: Mindful Collaboration

At first glance, mindful collaboration may seem like a squishy subject that has little to do with software development. But in reality, the opposite is true: we are humans working together to build something we care about. Collaboration is a key feature of a successful company. (Company, after all, literally means multiple people gathered together.) So when we talk about mindful collaboration, we’re talking about building a culture and environment that is optimized for humans working together.

Specifically, this is what mindful collaboration means for Engineering at Omada Health:

  • Leading with empathy
  • Minimization of ego
  • Proactively asking for feedback
  • Having clear expectations

Putting it into practice

Probably the most important characteristic in mindful collaboration is empathy. No matter what we do, we must always consider the needs of other people: our teammates, our stakeholders, our target audience. When my team makes decisions, we must consider the impact of those decisions on other teams. Will there be any side effects? What assumptions and expectations are there about what our team works on? What don’t we know, that other teams might, about something we’re about to do?

We must understand that there will always be tension between requirements, available resources, and time. This often manifests as tension between developers and stakeholders. I believe this tension is healthy: it pushes us outside our comfort zones, which is how we grow as human beings, and it makes us question our assumptions, which helps us better understand each other.

To be an effective collaborator requires awareness of one’s ego, and the ability to set it aside. Doing so enables us to receive constructive feedback without taking it as a personal attack, and to participate in blameless postmortems. It allows us the freedom to acknowledge how we showed up to a meeting or a pairing session: maybe I wasn’t entirely present because I have a lot on my mind, or maybe I was more defensive in a conversation than I intended to be.

In addition to receiving constructive feedback, mindful collaboration includes proactively soliciting feedback, and graciously receiving it. This can happen anywhere: in team retrospective meetings, one-on-one conversations, formal feedback tools, or even in ad hoc hallway conversations. It can be as simple as asking, “Is there something I could have done a little better?” We find our retrospective meetings to be a good forum for this kind of feedback, especially when it comes to finding ways to improve our processes.

Empathy and a proactive feedback culture—one in which feedback is offered in a continuous cycle, in varying forms—allow us to engage in spirited debate, particularly when arguments are backed by data (more on this in a later post!).

When we work together as a team, and especially when multiple teams work together, it is key to set clear expectations of everyone involved. What are we trying to accomplish? By when? We ensure clear communication of expectations by clearly documenting requirements and acceptance criteria as early as possible—and regularly coming back to this documentation as we work on features, to check in on how we’re doing relative to the expectations we set at the start.

This often applies to meetings, as well. When I am running a meeting, I like to build an agenda before it starts. This helps me identify what I hope to get out of it, and will help keep it from going off the rails. (Everyone’s time is valuable.) During the meeting, I ask questions—and encourage others to ask questions—when things are unclear. At the end, if applicable, I run through the conclusions reached during the meeting, so that everyone is on the same page. This also helps ensure that I heard everyone correctly.

What have we learned? How have we benefited?

Doing this effectively, on an ongoing basis, is hard! Mindful collaboration requires mutual trust and a willingness to be vulnerable, on the part of everyone involved. It can feel awkward. It takes practice and reflection, performed in cycles, to get it right.

When we get it right, though, it is awesome. It leads to better function across teams. It helps teams identify potential problems early. Maybe that means a change in direction, or maybe it just means the affected teams have more time to prepare. Either way, everyone has benefited! Moreover, proactively looking for potential effects on other teams builds their trust in my team.

Mindful collaboration requires effective communication, and that needs to take multiple forms, and usually involves some amount of repetition. We’ll often see the same message communicated through both Slack and email, because not everyone uses both. Then there are the tools we use for things like project management, issue tracking, documentation, and so on. Even sticky notes can be effective. And there’s rarely a substitute for face-to-face communication.

The biggest benefit by far is the way it produces an inclusive work environment. When people feel valued, they’re happier. And when people are happier, they’re willing to invest more of themselves in their work. That feeling comes through in the office culture, which makes people want to stick around. And who wouldn’t prefer to work in an environment where their colleagues actively want to be there?


Mindfulness is difficult, and collaboration can be difficult, so it stands to reason that mindful collaboration poses a challenge. It’s complicated, it’s messy, and it takes practice. A lot of practice. But the benefits are so worth the effort: improved cross-team communication and function; a greater feeling of ownership and investment; happier teams; and a more inclusive environment. A team that works well together is a team that will last.

Engineering Values, part 1: Sustainability

At Omada Health, we have a set of values that guide everything we do as a company. This post, however, is not about our company values, but the values held by our Engineering department. These are the principles that drive the work we do as software developers, data scientists, and infrastructure architects. These values are the product of brainstorming by the management team, and were given shape by input and healthy debate across the entire department. (As with any exercise in identifying guiding principles, there was very little that was clear and obvious from the get-go.)

Our Engineering Department’s values, in no particular order, are:

  • Sustainability
  • Data-Driven Approach
  • Mindful Collaboration
  • Shipping Software
  • Diversity & Inclusion
  • Learning & Innovation

Today’s post concerns the first of those, and future posts will go into detail on the others.

Value: Sustainability

When we talk about sustainability, we aren’t referring to ecology or economics, so I’m not going to talk about renewable energy or fair-trade crops. For us, sustainability means pacing ourselves, thinking about the future without losing sight of present needs, and remembering that professional software development is a marathon, not a sprint.

As it applies to Engineering at Omada Health, sustainability means, among other things:

  • Preventing turnover
  • Avoiding knowledge silos
  • Shipping quality, well-tested code
  • Having a strong relationship with our Product teams

Putting it into practice

We’ve put this value into practice in a number of ways, whether it’s promoting a healthy work-life balance, practicing pair programming (it’s our default mode of operation), or taking the time to refactor old code. We practice test-driven and behavior-driven development (TDD and BDD), which we combine with comprehensive monitoring of key performance indicators (KPIs) and system health, to help surface anomalies and outages as quickly as possible.

In service of sustainability, we practice some tenets of Agile software development. The principles that are most important to us:

  • Starting simple and improving iteratively
  • Continuous delivery of working software
  • Being responsive to changing requirements and circumstances, especially when they are beyond our control
  • Allowing our teams to organize and run themselves
  • Face-to-face communication
  • Close alignment between the Product and Engineering teams
  • Regular retrospective meetings to reflect on, and make adjustments to, team processes

How does practicing sustainability help us?

First and foremost: teams made up of happy people are more productive. (Who knew?!) Through promoting a healthy work-life balance, reasonable working hours, a culture of inclusion (more on that in a future post), and building and nurturing cohesive teams, our people enjoy coming to work, and they are motivated to make Omada Health a leader in helping people live free of chronic disease.

On the more technical side, we know from experience that keeping things simple is challenging. How do we decide what needs to be included in an MVP, and what can be added in a later iteration? How do we identify what we need to do now, what we’ll definitely need in the future, and what we might need later on? Answering these questions requires a firm understanding of the problems we’re trying to solve, and putting those answers into practice requires buy-in from stakeholders. (Then there’s the question of identifying your stakeholders!)

Of course, we want the software we build to last. TDD, BDD, and pair programming are effective ways of preventing knowledge silos, documenting expected behavior, and getting input from multiple people. What’s more, we encourage everyone to refactor old code when they touch it in the course of writing new code. Taking the time to refactor now will make twelve-months-from-now you very happy.

Lastly, doing all these things is impossible if the relationship between the Product and Engineering teams is broken or lacking in trust. We encourage a healthy relationship between Product and Engineering by ensuring that they are constantly working closely together, and always able to communicate. Whether that’s in daily stand-ups, planning sessions, retrospectives, or ad-hoc conversations, the key is open, honest, and clear communication.


At the end of the day, we like feeling proud of the work we do. We want our work to have a profound and lasting impact on the lives of our participants. The surest way to ensure that our work will endure is to practice sustainability across the board, every day.

Reference: 12 Principles Behind the Agile Manifesto

Swift 3 Migration

In March, the iOS team at Omada Health began the process of migrating our app's Swift code from Swift 2.3 to Swift 3. This was a large undertaking for a small team, and we would like to share some of the strategies used and the lessons we learned.

Why migrate to Swift 3?

Our app was initially written in Objective-C, and we added Swift early on. By the summer of 2016, most new code was being written in Swift, although a considerable amount of Objective-C remained. We chose to postpone adopting Swift 3 for a few reasons: competing priorities, memories of the problems we faced moving to Swift 2, and dependencies on libraries that had not yet released Swift 3-compatible versions.

However, as time went on, we knew that delaying the migration would only mean more problems later. Every line of Swift 2 code we added would eventually have to be converted. Many libraries and tools added Swift 3 support and began dropping support for Swift 2, and new releases of Xcode, including the current one, no longer supported Swift 2 at all. Delaying the migration only meant more Swift 2 code to convert.

Swift 3 Advantages

We’ll defer to other blogs and resources for more detail on Swift 3 itself. For us, these were the most important advantages:

  1. More readable code, including:
      • Named function parameters. For example, before and after:

return Weight.humanValueStringForWeightValue(weightValue, includeSuffix: true)
return OPWeight.humanValueString(forWeight: weightValue, includeSuffix: true)

      • Lowercased enum cases
      • Date for bridging from NSDate, as it can now include a time zone
      • Date conforms to Comparable
      • Date is a value type, so it can be a constant
  2. Better tooling
  3. A faster compiler
  4. The ability to use structs on the stack instead of classes on the heap
  5. A 77% smaller build product
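A couple of these points can be seen in a short sketch (the enum and variable names here are invented for illustration):

```swift
import Foundation

// Swift 3 enum cases are lowercased by convention.
enum WeighInStatus {
    case pending, recorded, skipped
}

// Date is a value type bridged from NSDate, so it can be a constant,
// and it conforms to Comparable.
let start = Date()
let later = start.addingTimeInterval(60)
print(later > start)  // true
```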

For more, see the Swift Evolution proposals, specifically the proposal on migrating Foundation types to value types.

How we did it

The most basic approach might have been to simply update our libraries to Swift 3 versions, run Apple's conversion tool on our project, fix errors left by the conversion tool until everything compiles, and then spend some amount of time manually testing the app to make sure we haven't introduced bugs. We also considered hiring outside contractors to do the conversion.

We decided against this for several reasons. At Omada, our engineering team has been strongly influenced by Extreme Programming, a type of agile software development, and we prefer short development cycles and frequent releases. We wanted to avoid having our app in a broken, unreleasable state for an unknown length of time, we wanted to continually monitor our progress and estimate how much work remained, and we don’t have a dedicated QA team to find all the bugs for us. We use a Jenkins server as our continuous integration system to run tests on every check-in.

Gradual Approach

We wanted an incremental approach that used tests to help us with the conversion, and we wanted to take advantage of Swift 3’s improvements while minimizing the scope and risk of each change.

We prepared for the migration by adding a basic suite of acceptance tests. These tests are written using KIF to simulate user interactions with the app running against a development API server. The app had been tested this way before, but the previous developers abandoned those tests after they became too brittle and difficult to maintain. Our new tests were very basic, mostly just navigating to the major screens and making sure the app does not crash. With just a few tests we were able to cover about 20% of our code. This complemented a suite of around 1,500 unit tests, and would let us sanity-check our converted code even before we finished converting the unit tests.
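A smoke test in this spirit might look roughly like the following (the screen names and accessibility labels are invented for illustration, and the exact KIF/Swift test setup is omitted):

```swift
import KIF

// Navigate to a major screen and verify that it appears, i.e. the app
// does not crash along the way. KIF's tester() drives the UI through
// accessibility labels.
class NavigationSmokeTests: KIFTestCase {
    func testCanReachProgressScreen() {
        tester().tapView(withAccessibilityLabel: "Progress")
        tester().waitForView(withAccessibilityLabel: "Progress Chart")
    }
}
```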

We spiked on converting a few classes to Swift 3 to get a sense of how fast we could convert them, and to learn what kinds of problems might come up. We added some code metrics reporting to our CI process, and were able to estimate that it would take 2 to 6 weeks to do the migration.

When we were ready to start the migration, we began by creating a new dynamic framework, allowing us to break our large app into two units:

  • A Swift-only framework with no dependencies on external libraries or the rest of our application, and no bridging headers.
  • The main app that loads our compiled Swift framework and builds the rest of our Swift and Objective-C code.

With no dependencies or bridging headers, the Swift framework compiled very quickly, and changes to the main app would not require recompilation of the Swift framework. We created a spreadsheet listing all of the Swift files in our app, and a swift3 conversion branch of our git repo. Then we started moving one file at a time into the Swift framework, beginning with the smallest, most isolated classes, adjusting access control levels of the classes and methods and importing our framework as necessary. After each commit we rebased the swift3 branch on master, converted the moved code to Swift 3, and marked the progress on our spreadsheet.
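Adjusting access control usually meant widening a moved class’s visibility from Swift’s default internal level to public, roughly like this (the class name here is invented for illustration):

```swift
import Foundation

// Once this class lives inside the framework, it must be public (and so
// must its initializer and methods) for the main app target to use it.
public class WeightFormatter {
    public init() {}

    public func format(_ weight: Double) -> String {
        return String(format: "%.1f lbs", weight)
    }
}
```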

Through this process we found a lot of dead code that we could just delete! We also found classes that were difficult to move because of their dependencies. Sometimes these classes could be easily refactored to break those dependencies. In other cases we just noted the problem on our spreadsheet and moved on to lower-hanging fruit. We made several passes over our list and it became easier to move more classes as their dependencies had been moved previously. We also began using more sophisticated dependency-breaking techniques. In some places we extracted new protocols that could be defined in our Swift framework and implemented in the main target. In other cases we could move most of a class to the Swift framework and create an extension in the main target.

For example, suppose we had a ProgressViewModel that depends on a Participant for some data, and for whatever reason we didn’t want to move Participant into our framework:

class Participant {

    func firstName() -> String {
        return "Matt"
    }

    func allWeights() -> [Double] {
        return [210, 207, 208, 204, 201]
    }
}

class ProgressViewModel {

    let firstName: String
    let weights: [Double]

    init(participant: Participant) {
        firstName = participant.firstName()
        weights = participant.allWeights()
    }
}
We might remove the dependency on Participant from the definition of ProgressViewModel so that the class could be easily moved:

class ProgressViewModel {

    let firstName: String
    let weights: [Double]

    init(firstName: String, weights: [Double]) {
        self.firstName = firstName
        self.weights = weights
    }
}
Then we can add an extension of ProgressViewModel in the main app with a convenience initializer that maintains compatibility with our existing code:

import SwiftOmada

extension ProgressViewModel {
    convenience init(participant: Participant) {
        self.init(firstName: participant.firstName(), weights: participant.allWeights())
    }
}
After a while we added an additional test target and began moving unit tests for some of the Swift framework’s classes into it, converting those tests to Swift 3 so we would learn early about any difficulties of testing with Swift 3. We added a Jenkins job to build those tests on every push to our Swift 3 branch.

We found this process to have a lot of benefits. We were always aware of how much work was left, and we could see how much we had accomplished, which kept the migration from feeling like an endless slog. We felt confident that we were not introducing new bugs we would have to hunt down later. We also knew it would be possible to interrupt the migration if there was an urgent need to release a bug fix.

As the migration project progressed, it became harder to move individual Swift files into the new target. We were left with around 60 Swift files whose dependencies were too tangled to deal with on their own, so we began looking for a different strategy.

Revised strategy

We tried using tools that generate charts to visualize the graph of our app's dependencies. They mostly showed what we expected: a big, tangled ball at the center of our application. However there were several overlapping clusters, and we hoped that by cutting in the right places we could separate the ball into 5 or 6 manageable chunks. At that point we would remove all of the code from the main target and add back only what was required to launch the app. Then we would add one chunk at a time, converting each chunk to Swift 3 as we went.

Again we made a spreadsheet listing each file in our main target, which chunk it was a member of, and its status. We observed that these chunks were roughly related to the main view controllers of our app, and often these view controllers were passing objects to each other after being loaded from storyboards, during segues, etc. To address this we started using the dependency injection framework Swinject and its companion SwinjectStoryboard, which one of our developers had used successfully on previous projects. Swinject is used to define a container which can resolve dependencies, and SwinjectStoryboard hooks into the process of instantiating a view controller from a storyboard to provide it with the objects it needs.

import Swinject
import SwinjectStoryboard

extension SwinjectStoryboard {

    class func setup() {

        defaultContainer.register(UIApplicationAdapter.self) { r in
            // … (registration body omitted)
        }

        defaultContainer.register(OPLoginServiceAdapter.self) { r in
            let service = OPLoginService()
            service.analyticsService = r.resolve(OPAnalyticsServiceAdapter.self)!
            return service
        }

        defaultContainer.storyboardInitCompleted(OPProgressViewController.self) { r, c in
            c.account = r.resolve(OPAccount.self)!
            c.accountManager = r.resolve(OPAccountManagerAdapter.self)!
            c.accountUpdater = r.resolve(AccountUpdaterAdapter.self)!
        }
    }
}

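Stripped of our app-specific types, the Swinject registration/resolution pattern itself is quite small. A minimal, self-contained sketch (the protocol and class here are invented for illustration):

```swift
import Swinject

protocol GreetingService {
    func greet() -> String
}

class EnglishGreetingService: GreetingService {
    func greet() -> String { return "Hello" }
}

// Register a factory for the protocol, then resolve it wherever needed.
let container = Container()
container.register(GreetingService.self) { _ in EnglishGreetingService() }

let service = container.resolve(GreetingService.self)!
print(service.greet())  // prints "Hello"
```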
In other places we changed our classes to use key-value coding to set properties and performSelector: to invoke methods.

Along with dependency injection, we used key-value coding to access coupled properties that were too hard to separate. For example, when one view controller sets properties on another in its prepareForSegue method:

[self.personalViewController setValue:self.messageBus forKey:@"messageBus"];

or when one model needs to access a deeply nested property of another:

let someValue = model.value(forKeyPath:"")

When we need to call a method on a class we do not want to import, we use performSelector:

[account performSelector:@selector(updateMeals)];


Once we finished converting all of the Swift code in our app to Swift 3, we made a new beta release with TestFlight and distributed it to our coworkers. We shipped a few small features and visible bug fixes alongside the Swift 3 migration to make it an interesting release for users. While we waited for feedback on the beta, we finished converting the remaining unit tests to Swift 3. After about a week we felt ready to release to the App Store.


Overall it took three developers working full time for four weeks to convert roughly 35,000 lines, within our estimate of 2 to 6 weeks. Our release had a 5-star average review, up from a 4-star average across all previous releases. Our test coverage increased from 50% to 64%, and we learned more about how our code works and how the app is structured.

Some technical improvements to our codebase:

  • Swift framework: a more modular design
  • Swift Unit Specs: can be executed quickly without loading the app
  • Date exposed bugs caused by using NSDate without an explicit time zone and calendar, and required us to be explicit about the Gregorian calendar and the current time zone
  • Open source: we started using open source libraries for dates and dependency injection (SwiftDate and Swinject)
  • Cleaner design: we removed monkey-patched Foundation and UIKit dependencies (we temporarily added extensions to Date during the migration to keep the compiler happy)
  • Multiple suites of tests helped us verify the conversion and maintain existing quality:
      1. Unit Specs: fast, command line only
      2. Unit Tests: load the main app and UIKit
      3. Legacy Tests: for Objective-C code
      4. Acceptance Tests: a suite of UI tests connecting to the real server running locally in development mode

In hindsight, our conversion would have been easier if we had started sooner (leaving less code to convert), modularized the application earlier, and adopted dependency injection earlier and more consistently.

To review the steps of the conversion:

  1. Break the large app into two units: a dynamic framework and the app itself.
  2. Create a Swift-only framework with no Objective-C, for faster compilation with no bridging headers.
  3. Make the main app load the compiled Swift framework and build the Objective-C code and the rest of the Swift code.
  4. Parallelize the conversion work. We used a Google Spreadsheet to list all Swift code with line counts and assign files to developers.
  5. Migrate one class at a time into the swift3 branch, where it compiles under Swift 3 inside the framework.
  6. Migrate tests into Unit Specs so that passing tests validate the converted code.
  7. Keep moving classes until it becomes too difficult.
  8. Build a "naked" app by removing all Swift 2 classes, so it builds with only Objective-C compilation linked against the Swift 3 framework.
  9. Break dependencies using key-value coding and selector strings.
  10. Use dependency injection.
  11. Re-add Swift classes to the main app as each one is migrated to Swift 3. Another Google Spreadsheet was used here to divide and conquer files between multiple developers.
  12. Estimate the work by:
      • Counting classes and lines of code
      • Converting a few typical classes and tests

How to run an internship program

In a previous post, I explored how running an internship program can benefit the team (and the company) more dramatically than you might expect. Now that you’re totally bought into why interns are awesome, you’re probably wondering how you can run an effective internship program at your company.

How our internships work

Whether the internship is three, four, or six months long, an Omada engineering internship follows the same general format: join the team and pair with everyone, do a solo project for a spell, and then rejoin the team. We’ve had 13 engineering interns so far, and each came to Omada with a different amount of experience with our stack. Having them start with the team gives us a chance to share our practices with them, and dump a heap of knowledge on their heads while they learn their way around the codebase. The solo project then gives them a chance to integrate some of that knowledge, and code review helps us and them see what they are still struggling with. When the intern’s solo is over and they triumphantly return to the team, their growth is evident.


Since interns are short-term employees, they go through an abbreviated application process — first, they complete a coding challenge that has some failing tests, then a general-purpose phone screen with questions about their technical interests and what recent code they’ve written that they are excited about, followed by a remote or in-person pairing session working on a small technical problem.

You clearly aren’t hiring an intern for their experience, so you should have a clear idea about what you are hiring them for. At Omada, we look for interns who ask a lot of good questions, display curiosity about how we ship software, and seem able to pick up new concepts fairly quickly. Algorithms and whiteboarding sessions wouldn’t give us nearly as much insight into how candidates actually write software, so we happily avoid them. (As we do with our full-time candidates, as well. Whiteboarding interviews are terrible and people should stop doing them.)


The biggest part of why we’re able to have an intern join the team and write production code immediately is that we do pair programming by default. Depending on the team, this can range from 40% to 90% pairing. Practicing our pairing skills with a junior developer makes us better pairs when we work with all of our teammates.

(Pair programming is where you have two keyboards, two mice, and one computer, and you take turns typing, and you work through the whole problem together. Pairing is not where you look over someone else’s shoulder when they get stuck, or whiteboarding architecture with a group of people, or working on similar code next to someone on separate computers.)

If your company does not currently have a pair programming practice, I wouldn’t recommend starting by pairing with your interns. Pairing across experience levels is a substantially different experience than pairing with an engineer more senior than you or at your level. It’s a skill not to hog the keyboard or race ahead when you’re on a roll, and to properly get your pair’s buy-in for your idea when they know a lot less about the codebase than you do. Sarah Mei has a great piece on precisely this problem, and you should read it!

(I do recommend you start pairing! Just begin with people at the same skill level, and later work up to the expert mode that is junior/senior pairing.)

If you do pair, a few things have helped us let our juniors drive more:

  • Really doing TDD, with ping-pong pairing (avoiding spikes, where the junior can get lost)
  • Having toys and other things for the senior to do with their hands, to keep them off the keyboard
  • Unplugging the senior’s keyboard!

It’s really hard not to grab the keyboard when you don’t know why the test is failing. As much as you can, resist the urge, and work through the mystery with the intern driving. You can do it!

Onboarding & Mentorship

Having regular interns keeps our onboarding processes in tip-top shape. We also make sure that the intern’s laptop is used for coding as soon as possible — usually on the first day, but definitely by the second day of their internship. When you’re pairing, it’s easy to use the machine that’s already set up, but that’s a trap! It’s important for the new person’s environment to be set up ASAP.

Interns are paired with more senior developers who act as a mentor during the internship, answering technical and non-technical questions. The intern’s mentor isn’t the only person who reviews their pull requests, but especially during the solo project, the mentor checks in regularly with the intern to make sure they’re on a reasonable path with their project. Regularly, in our case, has often meant a daily, scheduled check-in, otherwise it doesn’t happen often enough. Having time every day during the solo has helped us prevent our interns from going totally off the rails on their projects. (Of course, if they have questions during the day, they’re welcome to ask any member of the team. We haven’t yet had an issue with an intern asking too many questions.)

Solo Project

Omada engineering interns have a lot of say in what they build during their solo project. We’ve had some interns make tools that never got used (and they still learned things, which is valuable!), but we are especially excited when an intern can learn a lot while working on a project that is useful to the company. We’ve had especially good luck with having interns work directly with a product manager for their solo project, and letting them figure out what to build together.

If you’ve got an intern that really wants to work with a cool new technology that is completely unrelated to your current stack, I’d suggest gently reminding them that the value of their internship is not just in the code they write, but the process of working with your team. Trying out new tech is a great thing to do in your free time, when you don’t have the wealth of resources of a team and customers to build things for!

Code Review

Since we pair program, we don’t normally do code review via pull request. The exception to this is when any engineer is soloing, and so interns on solo projects submit pull requests that are reviewed by their mentor or another person on the team. Since the intern has already had at least a month of pairing with the team, they are usually moderately aware of our practices and style, so the feedback on each pull request isn’t starting-from-scratch education about how we write code at Omada.

If your company doesn’t pair program, you’ll probably want to start with a project that is somewhat less than business critical. It is important, though, that the intern is collaborating with other people at the company as much as any other engineer would. You’ll only reap the benefits of having interns around if they are fully integrated into your typical engineering processes and working on your actual codebase.

Hire them!

At the end of someone’s internship, if they’ve been awesome and we have an opening, we offer them a full-time position. This is an excellent way to further leverage the time investment we’ve made, but we still get a lot out of mentoring the interns that don’t end up in full-time positions.

At Omada, our interns have been fantastic teammates and many have stayed around in full-time positions. Having interns around regularly has helped us improve our communication skills, our codebases, and has really helped us increase the gender diversity on our teams.

(For more on why running an internship is incredibly valuable to the whole team, take a gander at this post.)

Internships: Good for the intern, great for the team

Hiring an intern is obviously beneficial to the intern: they get paid to learn to write better code, they get exposed to working with a team, and they get another line on their resume working toward being a supes profesh software engineer.

Less immediately obvious, though, is how great an intern can be for your team and your company. A good internship program provides an opportunity to look critically at your culture and your codebase to see where the weaknesses of both are.

(It's unfortunately easy to run an internship program without learning from it, but if you set things up properly and ask the right questions, having an intern on your team will pay dividends beyond their comparatively cheap salary.)

I've talked to a lot of people about more and less successful internships they've had or witnessed. For the less successful ones, the saddest part is when the company didn't look at its process or its code to see what could have contributed to an unproductive internship. It's easy to say "Oh, we just hired a bad intern," but rarely is the intern the only problem.

An internship might feel like a test of a junior engineer's abilities, but it's really more an audit of your team: how well do you communicate? How complex is your codebase? You will certainly teach your intern things about programming, but they will show you the things your team is totally rocking and the things you need to work on, both in code and in process.

Benefit #1: Code

We all have parts of our codebase that we know are a mess, that have weird old names due to quirks of history, that are fragile and no one wants to change a thing for fear of subtle bugs. But we get used to them. We have Stockholm syndrome with our codebases, and we've forgotten how silly (or terrible) things are.

We need to feel the pain of explaining to an intern why that model is named the way it is. Or asking them to hold the context of seven classes in their head to understand how this process works. (Having interns come through regularly is helpful here, because any intern will also become numb to the more painful parts of the codebase.)

Having an intern around who will look around the codebase and find patterns to follow will highlight for you which patterns you don't love any more, and remind you to fix them up at the next opportunity.

Benefit #2: Culture

Like explaining a complex bit of code, explaining how your team works and why you do the things you do will help you find gaps in your process. A team that can effectively onboard an intern is set up well to onboard any engineer, so that they can contribute meaningfully as quickly as possible. You'll also learn how good your team is at teaching (or knowledge transfer, for those who like corporate lingo better). Good teaching leads to fewer silos, which leads to higher bus numbers. Yay!

Having an intern-ready culture means your team will have an easier time retaining people who are collaborative and empathetic. At Omada, we use an internship-like program to rotate engineers through other teams. Especially when someone is learning a totally new technology, having a structured format to get started is incredibly helpful. This has the excellent benefit of allowing our teams more opportunities to learn from each other—rotating team members cross-pollinate ideas and practices from their "home" team.


Still not sure that your team is ready, or that you can spare the time?

The most common refrains I've heard from engineers who don't think they should hire an intern yet include:

  • We don’t have the time or resources to train them
  • Interns will write bad code that we’ll then have to maintain forever
  • Everyone on the team is so key to the codebase that we can’t have someone less than senior or else things would fall apart

These are reasonable concerns, but they point to larger issues in your engineering organization (beyond the very real problem that hiring an all-senior engineering team is very expensive and very hard).


If you have fewer than three mid- to senior-level engineers, maybe you should wait on hiring an intern. A team of all interns probably isn't an effective way to build a robust codebase, but a team of all seniors can tend to write more complex, clever code. Create a balanced team.

Running an internship program is absolutely an investment. You do need to take time to train people, but the ROI on your codebase and culture is worth it if you do it right. An internship program that doesn't teach you about the existing team and its code is not worth running.

Code Quality

Your company should already have structures in place that prevent crappy code from getting onto production — probably something like code review or pair programming. If you don't do either of those things, you should probably start! With your full-time engineers, not your interns.

So, when you hire an intern, put those structures to work. Give the intern actual production work, and see how good your team is at giving helpful, thoughtful feedback about the intern's code.

Not enough experience

If your team believes only senior engineers can be beneficial to your codebase, then you are going to have a hard time hiring, and probably have a myriad of other issues. People leave companies, and they take their knowledge of the code with them. Good luck!

Start with one

Being good at teaching means being good at helping all of your engineers level up, which will help them want to stick around. Teaching reinforces what you know, and highlights things you thought you knew but don’t. Being good at information sharing means that you will have fewer silos and higher bus numbers.

I hear often that startups don’t think they have the resources to support interns, but the question is: where do senior engineers start? Not as senior engineers. Providing internships helps us have more balanced teams, and hiring mid- to senior-level engineers is pretty damn difficult in the current market.

It’s likely that your first intern ever will have a worse experience than later interns. You’re going to learn how good your mid-level and senior engineers are at teaching a junior, and which parts of the codebase are really hard to explain to someone just starting their career. But please don’t let that stop you from trying.


All of the reasons above should be enough to start hiring interns. But there’s another benefit that I should mention: you can more easily increase the gender diversity of your team!

  • 16% of software engineers are women (source)
  • 18% of computer science grads are women (source)
  • 38% of bootcamp grads are women (source)

The statistics for the ethnic diversity of bootcamp grads are less exciting, but you can still work on the diversity of your teams by posting internship openings in places like the job boards of historically black colleges and universities, or working with organizations like CODE2040 to hire black and latin@ fellows.

How To

Ready to start an effective internship program? Read on for how to do it.

A brief guide to facilitating retros

One of the cornerstones of our engineering process at Omada is the retro (short for retrospective). A retro is a regularly scheduled time for the team to reflect on the past week and discuss what we should change about our processes to make the next week better. Most of our teams have retros at the same time each week, toward the end of the week.

Retros at Omada started with the engineering teams, and over the years we’ve introduced other parts of the organization to their magic. From the outside, retros can seem like post-mortems or complaining sessions. So it’s been important for us to emphasize that retros are regularly occurring and process-focused. (And that post-mortems and complaining sessions can also be very valuable — they just can't replace retros.)

As new teams started having retros, we realized that we needed a guide to the basics of an Omada retro. Since the facilitator has a major impact on the pace and tenor of the retro, we focused on training more facilitators. (In some teams, the manager or team lead runs the retro every week; other teams rotate facilitators.) I made this one-page cheat sheet outlining how to effectively moderate a retro. Take a look, and maybe even print it out and take it to your next retro!

hand-drawn retro facilitation guide

Here are a few translations and notes!

  • EANABS are Equally Attractive Non-Alcoholic Beverages. We usually have a mix of beer-drinkers, whiskey-drinkers, and non-drinkers at our retros. While alcohol is a fun way to demarcate that this isn’t just a regular meeting, it’s important to make everyone on your team feel included, so if you are serving alcohol, other fun drinks should be available.
  • Learn more about the Five Whys (and why you might ask them) on this wikipedia page.
  • The timeline assumes a 60-minute retro and a co-located team (thus the sticky notes and the pens). If your folks are remote, tools like Retrospectus are useful for collecting people's thoughts and reflections.
  • This is just one style of retro — there are many, many ways to run a retro!

(Here's the guide as a PDF.)

Thank you to Vincent Coste, who gave an internal presentation on our retro practices that inspired me to create this guide!

Solving the ‘Matt Problem’

It’s been a big couple of years at Omada Health. Billions of steps tracked, millions of weigh-ins, two funding rounds and a whole lot of hiring. According to our company intranet, 70% of the company joined after me, and I’ve only been here a year. With so many new names to put faces to, you start to run into something I like to call the ‘Matt Problem’. The ‘Matt Problem’ arises when you receive an email simply signed:

Matt

and you have to work out which of the twelve Matts at your company sent it.

We tried giving each Matt a nickname. One Matt had a large beard - Matt, the Beard; another had longer than average hair - Matt, the Hair. But what happens when the next Matt joins the team with even longer hair and an even larger beard? Matt the Hairiest didn’t stick.

So at our recent company hack day, I tried to tackle this problem by making our own internal Rapportive/Connectifier/FullContact style Google Chrome Extension for Gmail. All of these extensions display extra profile information about the person sending you email right in your inbox. I thought if we could show a smiling profile picture together with their role at Omada Health and a brief introduction, it would bring our community closer, one Matt at a time.

This approach took advantage of the fact that everyone at Omada Health uses Gmail for email, Chrome for web browsing and Parklet to store our profile information.

Without any help from our marketing department, I seamlessly combined ‘Chrome’ and ‘Omada’ to name the project ‘Chromada’ and, of course, created an equally terrible logo.

Chromada Logo
So let’s write some code!

The good news is the team at Streak have released a library called InboxSDK that makes this possible in under fifty lines of JavaScript. Dropbox uses it for their Gmail integration, which according to the Chrome Web Store, has over 3 million active weekly users. The main benefit of the library is that it allows developers to build apps on top of a stable API, rather than having to make changes every time Gmail changes its DOM structure.

Every Chrome extension is essentially a group of JavaScript and CSS files that are loaded by the browser as described in the extension’s manifest.json. The following example describes an extension that will load the InboxSDK library and the Chromada application js and css files when the user visits Gmail. We also added Parklet to the permissions array, which allows the Chrome extension to make requests to retrieve employee profile information.

{
  "manifest_version": 2,
  "name": "Chromada",
  "version": "0.0.1",
  "icons": {
    "16": "images/omada-icon-16.png",
    "128": "images/omada-icon-128.png"
  },
  "permissions": [],
  "content_scripts": [
    {
      "matches": ["*"],
      "js": ["inboxsdk.js", "chromada.js"],
      "css": ["styles/chromada.css"]
    }
  ]
}
Next we use the InboxSDK API to load the library. An App ID is required here, so follow the instructions in the documentation to register for one. Once the SDK is loaded, we can use the Conversations.registerThreadViewHandler(handler) method to add a handler that will be called when the user navigates to an email thread in Gmail. The ThreadView object that is passed as an argument to the callback gives us two functions that we are interested in. First, ThreadView.addSidebarContentPanel() lets us add the Chromada html element to the sidebar of the Gmail thread view. Then, we use ThreadView.getMessageViewsAll() to iterate through each email in the thread and listen for individual message state changes. MessageViews move between hidden, expanded, and collapsed states as the user clicks on each email in a thread. We use this to ensure the employee’s profile in the sidebar stays in sync with the email the user is currently reading.

InboxSDK.load("1.0", "YOUR_APP_ID_HERE").then(function(sdk) {

  // the SDK has been loaded, now do something with it!
  sdk.Conversations.registerThreadViewHandler(function(threadView) {

    // Add a panel for Chromada to the thread view's sidebar
    var chromadaDiv = document.createElement("div");
    chromadaDiv.setAttribute("id", "chromada");
    threadView.addSidebarContentPanel({
      el: chromadaDiv,
      title: "Omada Health",
      iconUrl: chrome.extension.getURL("images/omada-icon-38.png")
    });

    // Keep the sidebar in sync with the email the user is reading
    threadView.getMessageViewsAll().forEach(function(messageView) {
      messageView.on("viewStateChange", function(event) {
        // update the profile shown in #chromada for this message's sender
      });
    });
  });
});
Once we have a reference to a DOM element in Gmail, we can display anything we like. Chromada uses the email address of the sender to look up the full employee profile information in our company intranet, Parklet. A name, title, start date and profile picture can then be added to the sidebar panel. Here’s a screenshot featuring a smiling colleague:
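To make that last step concrete, here is a rough sketch of how a looked-up profile might be rendered into the sidebar panel. The renderProfileCard function and the profile field names (name, title, startDate, photoUrl) are illustrative assumptions, not Parklet's actual API:

```javascript
// Hypothetical sketch: turn a Parklet-style profile object into the HTML
// for the Chromada sidebar panel. Field names are assumptions.
function renderProfileCard(profile) {
  if (!profile) {
    // Sender isn't an Omada employee, or the lookup failed
    return '<div class="chromada-profile">No profile found</div>';
  }
  return [
    '<div class="chromada-profile">',
    '  <img class="chromada-photo" src="' + profile.photoUrl + '">',
    '  <h2>' + profile.name + '</h2>',
    '  <p>' + profile.title + '</p>',
    '  <p>Joined ' + profile.startDate + '</p>',
    '</div>'
  ].join('\n');
}

// The thread view handler could then do something like:
//   chromadaDiv.innerHTML = renderProfileCard(profileForSender);
```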

Chromada screenshot

The Chrome Web Store makes distributing private Chrome extensions simple for Google Apps for Work users. On the Developer Dashboard, just select the option to restrict visibility to everyone with an email address. Then it’s a simple one click install.

Deployment settings

After a quick demo at the following all-hands MMM (Monday Morning Meeting) and a follow-up email, Chromada was launched. One month later, almost three quarters of the company are weekly active users, and Matts all around the office are no longer strangers.

Chrome Web Store screenshot