
GraphQL: Reaping the Benefits

In my last post on our adoption of GraphQL, I discussed the process our engineers took in evaluating and trialing GraphQL as a technology for improving how we develop APIs. Throughout that process, we confirmed that GraphQL solves or improves upon several of the shortcomings we had identified in the way we traditionally developed APIs. In this post, I’ll enumerate those shortcomings and describe how building several new APIs with GraphQL has delivered numerous benefits.

Documenting our API

We’ve always faced the challenge of documenting the structures and data types of our API’s responses and expected request formats. The lack of good documentation burdens our mobile developers, who would sometimes end up reading through our backend Ruby code just to figure out what type a particular data point would have in a response. We tried a few different solutions, such as handwriting API documentation in Google Docs or code comments and describing our structure/types in a YAML-based format.

These approaches worked for the most part, but suffered from issues like:

  • Requiring manual engineering effort to stay up-to-date
  • Supporting response types/structures out-of-the-box, but not requests

Since writing an API with GraphQL requires that you define a statically typed schema, there are several great tools available to automatically generate documentation from your schema. We went with GraphiQL since it comes out-of-the-box with the graphql-ruby gem and easily mounts into a Rails app. So far, both web and mobile engineers have appreciated having complete and always up-to-date API documentation at their fingertips.
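To give a sense of how little wiring this takes, here is a minimal sketch of mounting GraphiQL in a Rails app (the paths and the development-only guard are illustrative rather than our exact configuration):

    # config/routes.rb -- illustrative paths, not our exact setup
    Rails.application.routes.draw do
      # The GraphQL endpoint itself, as added by graphql-ruby's install generator
      post "/graphql", to: "graphql#execute"

      # Mount the GraphiQL IDE against that endpoint, in development only
      if Rails.env.development?
        mount GraphiQL::Rails::Engine, at: "/graphiql", graphql_path: "/graphql"
      end
    end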

Sending the Right Data Points in Fewer Requests

Before GraphQL, we often struggled to balance the number of our API endpoints and the size of their responses. Initially, we had dedicated endpoints for each type of resource (such as meals tracked or messages received). But this often required our client apps to make multiple requests to get the data they needed, which could easily turn into a performance problem, particularly for our mobile apps when used on cellular data networks.

The next evolution was API endpoints tailored to particular pages or screens in the app. This allowed all the data for a particular section of an app to be retrieved in one request. But inevitably discrepancies would arise between platforms, and one platform might end up with data it didn’t need. For example, we sometimes decide to initially launch new features in only one of our mobile apps or only in our web app, so that we can learn about the use and value-add of the feature before investing in it more heavily. But with page/screen-based API endpoints, it becomes harder to exclude data from the responses to apps that don’t use it, and to avoid the associated backend data loading and processing.

GraphQL helps us with both the number of requests and the data points loaded. A client app can load several different types of resources in one GraphQL query over one HTTP request. Additionally, since GraphQL only responds with the data points, or “fields”, that the client app requested in a query, we can easily avoid loading data for an app that doesn’t need it.
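As a hypothetical illustration (the schema and field names below are made up, not our actual API), a single query can span multiple resource types, and the server resolves only the fields the client listed:

    # One HTTP request, one query, two resource types -- and only the
    # requested fields are loaded and returned.
    query_string = <<~GRAPHQL
      query HomeScreen {
        meals(last: 5) {
          description
          loggedAt
        }
        unreadMessages {
          body
          sentAt
        }
      }
    GRAPHQL

    # Executed server-side by graphql-ruby; the response JSON mirrors the
    # shape of the query and nothing more.
    result = OmadaSchema.execute(query_string, context: { current_user: current_user })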

Increasing Ease-of-Use for our Mobile Engineers

With our REST-based endpoints, our iOS and Android engineers have traditionally needed to write strongly-typed classes in Swift or Java/Kotlin to represent the data sent back from our API before they could start consuming data from a new endpoint. With Apollo’s native mobile libraries, they can instead have these classes automatically generated from our API’s schema and the GraphQL queries and mutations they’ve written. This means less time before they can start loading data, and less human-generated code for them to maintain.

Catching Integration-level Bugs

Verifying that our web and mobile client apps make valid requests to our API has traditionally required time-consuming setup. Across both web and mobile, we’ve relied on often-expensive end-to-end tests that run our apps against a running server. While these tests provide a high level of confidence that the integrations between our apps and server are sound, they often aren’t written until late in the feature development process, and existing ones usually aren’t run until an engineer pushes their work to our CI system. This increases the time from a bug being introduced in either the API or a client app to when it’s caught.

GraphQL improves on this situation thanks to its strongly typed nature. Because tooling can statically analyze queries and mutations and validate them against our API schema, we can catch bugs more quickly than with end-to-end tests. On mobile, the automatic class generation process mentioned before will fail at compile time if an engineer writes an invalid query or mutation, or if we accidentally introduce a breaking change on the backend. We’ve also set up eslint-plugin-graphql to perform the same checks for our web frontend.
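Those checks are client-side, but the same schema validation can be exercised on the backend as well. As a rough sketch (with a made-up schema name and file layout), graphql-ruby can validate checked-in client queries in an ordinary unit test, without executing them:

    # spec/graphql/client_queries_spec.rb -- hypothetical paths and names
    require "rails_helper"

    RSpec.describe "client queries" do
      # Validate each stored .graphql document against the schema so that
      # schema drift fails fast in unit tests rather than end-to-end runs.
      Dir.glob(Rails.root.join("spec/fixtures/queries/*.graphql").to_s).each do |path|
        it "#{File.basename(path)} is valid against the schema" do
          errors = OmadaSchema.validate(File.read(path))
          expect(errors).to be_empty
        end
      end
    end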

Centralizing the Definition of a Resource Type

In the past when we built page-based API endpoints, we sometimes found that the same data needed to be available in multiple endpoints for different pages (e.g. on our home page and on our Progress page), and in slightly different formats. This resulted in duplicated sources of truth for how a particular type of resource could be formatted in API responses, and more time spent writing similar code and tests.

In GraphQL, you’re encouraged to define a single type for a given resource (e.g. “meal” or “blood pressure reading”). Various fields on other types can return instances or lists of instances of that resource, but that resource is still implemented in the API by one type. That type can contain all the possible data a client might want to have access to, since GraphQL will only return the data points that the client explicitly queries for. Being defined in one place rather than several also makes it less likely that bugs will be introduced in the representation of a data type.
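As a rough sketch of what this looks like in graphql-ruby’s class-based API (the fields here are illustrative, not our actual schema):

    # One canonical type per resource; any field that returns blood pressure
    # readings goes through it. (Illustrative fields, not our real schema.)
    module Types
      class BloodPressureReadingType < Types::BaseObject
        field :id, ID, null: false
        field :systolic, Integer, null: false
        field :diastolic, Integer, null: false
        field :taken_at, GraphQL::Types::ISO8601DateTime, null: false
      end

      class ParticipantType < Types::BaseObject
        # Other types simply return instances of the shared type
        field :blood_pressure_readings, [Types::BloodPressureReadingType], null: false
      end
    end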

GraphQL provides a nice balance between the organization of RESTful APIs and the convenience of page-based APIs by allowing clients to mix-and-match the types and amount of data pulled in a given query or mutation result, according to the client’s needs.

Simplifying Data Retrieval and Storage on Web

For our participant and health coach-facing web apps, we’ve been using React and Redux for several years now. While they’ve given us a big productivity and maintainability boost, we found that they sometimes fall short when it comes to retrieving and managing data from our API. React is not a data loading library, so it gets a free pass. Redux, out of the box at least, is not a data loading library either, but rather a predictable state manager that does a great job at managing app and UI state that various disparate components need to access.

You can leverage other libraries to add data loading capabilities to Redux. But you’re often left writing a lot of repetitive code around making API requests and managing loading, success, and error states. You also have to figure out the best way to store your data and manage the relationships between different pieces of data, as well as coordinate as a team to keep everyone’s code consistent. There are many libraries out there to help with these concerns, but no single one has risen above the others as the best solution, so you’re left fighting decision fatigue.

We found that Apollo Client and its React integration for web bring some advantages over how we were doing data loading and management with Redux:

  • Request Lifecycle: Not only does Apollo make the API request for you, it automatically re-renders your React component when the request starts loading and upon completion, whether in success or error.
  • Data Normalization: With GraphQL, data comes back from the API in a nested format, with an object of one type of data containing one or more instances of other types of data, and so on. Apollo automatically normalizes this data into a store in which each object is stored under a key based on the object’s type and ID. This makes it easy to look up and update a particular object without having to traverse a nested data structure.
  • Standardization: Apollo’s API is more opinionated than Redux and its associated libraries. The patterns it provides make it easier for different developers to write more consistent code.
  • Reduced state management code: While Apollo still requires some work on your part to update the client-side store in some cases (add and delete), it can be simpler than what you might do in a Redux state tree since you can pull specific objects out of its normalized store by ID or by query.

Conclusion

Overall, using GraphQL has been a boon to development, both in building and consuming APIs. These benefits make the learning curve and cost of adoption worthwhile.

In a future post, I’ll delve into the technical details around the tips, tricks, and watchpoints we’ve discovered so far when working with GraphQL. Stay tuned!

Thank you to the following fellow Omadans for their help with GraphQL adoption and/or this blog post: Franck Verrot, Chris Constantine, Ethan Ensler, Alexander Garbowitz, Scott Stebleton, Austin Putman, Jonathan Wrobel

Exploring GraphQL at Omada

We build a lot of APIs at Omada. We build them to power the web, iOS, and Android apps our participants use to interact with their health coaches, take their lessons, track their data, and more. We build them to power the internal web app our health coaches use to understand what motivates our participants and help them make better decisions, as well as the internal UIs our support teams and operations managers use.

Traditionally, we’ve built APIs using a mostly RESTful architecture and the conventions native to the Ruby on Rails framework that powers our backends. More recently, engineers from a couple of teams started exploring alternatives. After enumerating and considering the shortcomings of our current way of doing things, we decided to explore whether adopting GraphQL would help solve some of our challenges. We experimented with and evaluated GraphQL through incremental adoption, starting with a minor internal-facing feature and building up to having it power the next major feature for our participants.

In this post, I’ll cover:

  1. The process we used to get started with GraphQL.
  2. The strategies we used to overcome GraphQL’s learning curve.

In an upcoming post, I’ll cover the ways in which we discovered GraphQL solved some of our problems, including API documentation, data loading efficiency, mobile workflows, request correctness, and frontend state management.

Adoption Process

Omada Engineering has long held a bi-weekly meeting where engineers from across all teams and levels can come together to discuss cross-engineering technical initiatives, shared challenges, and new ideas about technology and processes. For some topics, a separate group of engineers will be assembled to dig deeper into a particular area. After discussing how we could improve the way we build APIs, we decided to form one such working group to analyze our current practices and investigate potential improvements.

When this group first met, we identified several shortcomings with our current API architecture and technology choices, including inconsistent documentation for client-side engineers, data loading inefficiencies, and complex frontend data management. We considered several solutions, including GraphQL and JSON API. We decided to go forward with GraphQL because of its robust tooling, growing community, and built-in type safety.

Internal First

The first feature we used GraphQL for was a view in our customer support-facing internal app that displayed a paginated list of the blood pressure readings a participant had taken, which we were building out to support our program’s expansion into hypertension. We chose this as a relatively straightforward feature for trying out a new technology (as opposed to the more complex UI health coaches get for reviewing blood pressure readings).

After releasing the customer support UI and confirming that it was bug-free, we continued over the next few months to roll out several more internal features using GraphQL, mostly new additions to our health coaching tools. During this process, the Coaching Engineering Team experimented with a few different ways of structuring GraphQL-related code on the backend and frontend, and with how best to leverage the libraries we’d chosen to get us going with GraphQL, graphql-ruby and Apollo Client.

We hope to share some of these learnings in an upcoming blog post on tips, tricks, and watchpoints we discovered while adopting and learning GraphQL.

Mobile, the Next Frontier

Those of us on the backend and frontend web side who were advocating for GraphQL knew that if it didn’t work for our mobile teams, it wouldn’t work for our company. In order to find out how well GraphQL suited their needs, a couple of engineers paired or consulted with our lead iOS and Android engineers to help them prototype a simple feature as a test case for GraphQL. They investigated automatic code generation, security implications, and the testability of Apollo’s GraphQL libraries for iOS and Android. Though the prototypes were not meant to go to production, they gave us enough confidence that we’d identified the most important implications of GraphQL adoption on mobile.

Making a Bet on GraphQL

After all the initial work to get to know GraphQL and try it out on our various platforms, we were left wondering what the next step was. So far, we had only shipped GraphQL usages to production as part of our internal and coaching applications. In order to truly find out whether GraphQL would work for us long-term, we needed to try it out on a participant-facing feature built in both the web and mobile apps. We needed to make sure we were really ready for our GraphQL API to take on the additional traffic and feature complexity, as well as to provide the level of stability necessary for serving our participants as a provider of digital care. We also wanted to be prepared, if we decided GraphQL wasn’t for us, to deprecate our GraphQL API without breaking mobile app versions still in use in the wild.

The chance came along when our product team and one of our visionary engineers got together to propose a bold new feature: a participant community experience based around shared interests or challenges. This feature was going to be sufficiently complex for both the backend/web and mobile developers that we believed it would serve as a true test of GraphQL. This also meant we needed to finish the infrastructure work required to fully productionize our GraphQL API, including:

  • Limiting acceptable query complexity to guard against denial-of-service-style attacks (see the sketch after this list).
  • Restricting schema introspection to authorized users.
  • Integrating GraphQL API request-specific log data into our backend logging aggregation setup.
  • Capturing performance data from our GraphQL API.
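The first two items map directly onto schema-level settings in graphql-ruby. Here’s a rough sketch of what that configuration can look like (the limits and names are illustrative, not our production values):

    # app/graphql/omada_schema.rb -- illustrative limits, not our production values
    class OmadaSchema < GraphQL::Schema
      query Types::QueryType
      mutation Types::MutationType

      # Reject queries whose estimated complexity or nesting depth exceeds a
      # budget, limiting the damage a deliberately expensive query can do.
      max_complexity 200
      max_depth 15

      # Turn off the __schema/__type introspection entry points outside of
      # development; restricting them to authorized users is handled separately
      # at request time.
      disable_introspection_entry_points unless Rails.env.development?
    end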

I’m happy to say we’re close to the initial launch of this new feature using GraphQL, and we remain satisfied with it as a tool for improving the way we build APIs.

Scaling the Learning Curve

As awesome as GraphQL is, it and its associated libraries and tooling definitely come with a learning curve, especially for engineers used to a RESTful architecture and the default Rails conventions.

We leveraged the following strategies to get our engineers up-to-speed:

  1. Tech Talk: Omada engineers frequently prepare and deliver tech talks to their fellow engineers, usually on Tuesdays during lunch. Earlier this year, I gave a talk to introduce GraphQL to the rest of Omada Engineering.
  2. Pair Programming: Engineers at Omada frequently utilize pair programming to help us come up with better solutions, share knowledge, and decrease the need for separate code review. It provides a natural setting for one engineer to teach another about a new technology, and it helped us to spread GraphQL knowledge amongst the teams using it.
  3. Learning Time: Engineers at Omada are encouraged to take a couple of hours each week to invest in their professional development, whether that be through doing tutorials, watching conference talks, or something else. For those of us who wanted to delve deep into GraphQL, this dedicated time provided a great opportunity to level up. Additionally, the Coaching Engineering team lead and manager arranged for the whole team to spend a day working through a GraphQL tutorial together.

Conclusion

While Omada still has a ways to go in our journey with GraphQL, we’ve made a solid start and are already seeing the benefits. I’m excited to see our first GraphQL-powered, participant-facing feature go live soon, and to see how our use of GraphQL evolves over the coming months.

Stay tuned for the next post on our GraphQL journey, in which I’ll detail how GraphQL helped us solve several problems with API development.

Thank you to the following fellow Omadans for their help with GraphQL adoption and/or this blog post: Franck Verrot, Chris Constantine, Ethan Ensler, Alexander Garbowitz, Scott Stebleton, Austin Putman, Jonathan Wrobel