Launching Baby Registry

Walmart Labs launched a new baby registry experience at the beginning of April. I was part of the engineering team that rewrote the web application and services. Walmart doesn’t ship new customer-facing products frequently; this presents an interesting case study into how a large corporation creates something new.

Engineering efforts began in late 2018: a new team of mobile, service and web folks transitioned from well-staffed legacy applications or joined specifically for the project. I found myself looking for web work after deciding to leave the core mobile team (more on that later), and registry sounded like a fun return to my old JavaScript stack.


The MVP’s customer entrypoint centered on an interactive chat that translated users’ preferences into a fully stocked registry. This also (hopefully) made registry creation a bit more fun for users. The services team borrowed an engineer for the initial work: a generic chat service with canned questions, built with GraphQL and Apollo Server via hapi. Our team quickly found the codebase inflexible as requirements shifted. Chat composed only a small portion of the experience: we also needed several queries and mutations to power the clients’ views.


One of our fantastic iOS engineers introduced us to the concept of “backend-for-frontend,” where a single service collates responses from multiple APIs. Clients request data only from the backend-for-frontend, reducing network overhead and improving caching. Fortunately, iOS, Android and web maintain similar user interfaces, so all clients could rely on the same queries and mutations to power their experiences. We crafted a schema that sensibly mapped services’ responses for the clients.
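As a sketch of the idea (the service names and response shapes below are invented for illustration, not our actual APIs), a backend-for-frontend resolver fans out to upstream services in parallel and collates their responses into the single shape every client consumes:

```typescript
// Hypothetical upstream service clients; in reality these would call
// internal APIs. Shapes are illustrative only.
interface ItemService {
  getItems(registryId: string): Promise<{ id: string; name: string }[]>
}
interface MetadataService {
  getMetadata(registryId: string): Promise<{ title: string }>
}

// The BFF collates both upstream responses into one payload, so
// iOS, Android and web each make a single request.
async function resolveRegistry(
  registryId: string,
  items: ItemService,
  metadata: MetadataService
) {
  // Fetch from both upstream services concurrently.
  const [registryItems, registryMetadata] = await Promise.all([
    items.getItems(registryId),
    metadata.getMetadata(registryId),
  ])
  return { metadata: registryMetadata, items: registryItems }
}
```

The client never learns how many services sit behind the resolver; adding or splitting an upstream API changes only this collation layer.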

Separately, I decided that our original chat design didn’t meet our needs. A more flexible workflow presented itself:

+----------+       +--------+
| Question |-----> | Answer |
+----------+       +--------+
     ^                 |
     |                 |
     +-----------------+

The previous answers are passed into the next question, which is a function that returns an answer. This allows for question branching and the possibility of a more dynamic series of responses.
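The loop above can be sketched in TypeScript. The types and questions here are illustrative, not the production chat service: each question is a function of the previous answers, which is what enables branching.

```typescript
// Illustrative types for the question/answer loop.
type Answer = { questionId: string; value: string }

// A question receives all previous answers and returns the next answer
// (or null to end the chat), which allows branching.
type Question = (previous: Answer[]) => Answer | null

const askGender: Question = () => ({ questionId: 'gender', value: 'surprise' })

// A later question can branch on an earlier answer.
const askTheme: Question = (previous) => {
  const gender = previous.find((a) => a.questionId === 'gender')
  return gender?.value === 'surprise'
    ? { questionId: 'theme', value: 'neutral' }
    : { questionId: 'theme', value: 'classic' }
}

// Run the questions in sequence, threading answers forward.
function runChat(questions: Question[]): Answer[] {
  const answers: Answer[] = []
  for (const question of questions) {
    const answer = question(answers)
    if (answer) answers.push(answer)
  }
  return answers
}
```

Because each question sees the full answer history, inserting a new branch means adding one function rather than rewriting a fixed script of canned questions.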


I wanted to explore TypeScript given its recent praise. TypeScript was new to many folks on the team, including me. After a week of learning the basics, I found myself more productive writing TypeScript than JavaScript: refactors were a breeze (especially with VS Code’s language integration), unit tests required fewer assertions thanks to type safety, and adding features felt natural.

Web Application

The web team started development in November 2018 using React, redux, and Walmart’s homegrown framework, electrode. We lost our senior engineer to another team early on, so I assisted with code reviews while working on services. After the services team gained another staff engineer, I transitioned over to web full time.

More TypeScript

Buoyed by the benefits of TypeScript on services, we migrated our web codebase to TypeScript. The front-end carries more complexity due to the large DOM and React API surface areas. We completed the migration in January – not without a handful of any-typed safety escapes. This unlocked a massive development speedup over the following weeks.

We opted for a minimal GraphQL client to avoid the overhead of Apollo’s client. The service relied on gql2ts to align responses with the schema at build time. This aided our web client development, as we were able to use the same schema-derived types. For example:

type Registry {
  metadata: RegistryMetadata!
  items: [RegistryItem!]!
}

type RegistryMetadata {
  date: String!
  gender: RegistryGender!
  title: String!
  # ...
}

type RegistryItem {
  id: ID!
  name: String!
  price: Float!
  quantity: Int
  # ...
}

This translates to these TypeScript interfaces:

export interface IRegistry {
  metadata: IRegistryMetadata
  items: IRegistryItem[]
}

export interface IRegistryMetadata {
  date: string
  gender: RegistryGender
  title: string
  // ...
}

export interface IRegistryItem {
  id: string
  name: string
  price: number
  quantity: number | null
  // ...
}

The web client only requires fields necessary for its UI. Using RegistryItem as an example, we only need name and price, so we only request those fields:

import { IRegistry, IRegistryItem } from 'my-types'

export type RegistryFields = Omit<IRegistry, 'items'> & {
  items: Pick<IRegistryItem, 'name' | 'price'>[]
}

export const registryFields = `{
  metadata {
    # ...
  }
  items {
    name
    price
  }
}`
The RegistryFields type is paired with the registryFields query string over a genericized fetch-based wrapper, and RegistryFields is used in a redux reducer’s state. The alignment of types-to-request is a bit fragile: changes require modifying both RegistryFields and registryFields. It does, however, present huge benefits: the client now aligns correctly with the API, and changes result in a breaking build instead of runtime errors.
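A minimal sketch of what such a genericized fetch wrapper could look like (illustrative names and endpoint, not the production code): the type parameter ties a query string to the response shape we expect back.

```typescript
// Illustrative GraphQL fetch wrapper. The caller supplies the expected
// response type, so downstream code is fully typed.
async function query<T>(queryString: string): Promise<T> {
  const response = await fetch('/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: queryString }),
  })
  const { data } = await response.json()
  return data as T
}
```

A call site then requests the registry as `query<RegistryFields>(...)`, so any drift between the declared type and the selection set surfaces as a compile error in the reducer and components rather than a runtime surprise.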

A Web of Problems

Product and design focused on the mobile applications, leaving web as an afterthought. Our design department had a high turnover rate: at least three different designers owned visuals throughout the project, leaving us with incomplete mockups and no UX specification. While we had a wealth of web development expertise, most of us were new to the many challenges of web development at Walmart Labs. The result? Engineering was far behind as a strict launch deadline approached.

My lovely engineering manager’s continued efforts to warn leadership of the problems eventually paid off when we received reinforcements, including an old MRN cohort, Chris. We re-prioritized the backlog for a bare bones MVP, and began hacking.

At the end of February we lacked functional pages for any part of the experience. Chris and I worked evenings and met for weekend coffees and code. Through the web engineering team’s tireless efforts we delivered: everyone made personal sacrifices (several ~60 hour weeks on my part) to ship this thing. Despite discovering new requirements, changing feature designs, encountering serious ADA compliance issues, and finding a handful of major bugs, we were ready to ship by the deadline.

When we finally ramped up to 100% it was a relief. Everything worked! The redesign has already increased new baby registry creation significantly, and we received some great press.

Project Recap

What did we accomplish? What did we learn?

  • Sweating details at the expense of a foundation guarantees problems: Initial efforts focused on the entrypoint to the experience when we should have built up base functionality and integrations. This hurt us towards the end as we encountered several missing pieces.
  • Waterfall doesn’t work: Departments functioned in isolation instead of working as a team. We realized this post-launch during retro meetings with product, research, design and engineering. Waterfall inhibits design from ensuring their solutions work and prevents researchers from validating the product addresses customers’ needs, ultimately burdening engineering to hit inflexible deadlines. This promotes burnout, which we don’t want.
  • TypeScript was essential: We wouldn’t have shipped on time without TypeScript’s safety, which protected us from bugs throughout development. In fact, it saved us so much time that we shipped a major feature originally cut from the MVP’s scope. We also had the benefit of strongly typed GraphQL schema coupled with TypeScript definitions: wiring up the client to the API was seamless. “I’ve never seen anything like it. It just worked!” exclaimed Chris.
  • Firsts: As far as I know, we shipped the company’s first customer-facing web application written in TypeScript, and one of the first to be powered by a GraphQL service.

What Next?

The web team must repay some post-MVP technical debt (increasing test coverage, improving analytics). We’ll listen to our users’ feedback through data collection and research to hone the product into something that increasingly helps customers save money and live better.

Aside from user-facing features, we discovered some thorny areas within Walmart Labs’ developer ecosystem, including underwhelming service documentation and subpar developer tooling. I personally find this energizing: we can adopt aspects of startups (agility, use of existing open source solutions) and improve our tooling and frameworks, which will result in happier engineers, healthier inner-sourcing, faster feature delivery and bug fixes, and ultimately better user experiences. We can also impart our technology findings to others: good initial technology choices (TypeScript, GraphQL) made us successful, and I’d like to encourage their use throughout the greater organization.

Sound exciting? Good news: we’re hiring in Portland, Oregon. Check out our job listings, or shoot me an email for more details.

Gulpifying Jekyll

It’s important to use the right tools for the job. Investing in good infrastructure and setup reduces future fiddling and makes actual work happen faster.

This post is a walkthrough for setting up a Gulp-driven Jekyll site. Jekyll is a static site generator, which turns organized posts and pages written in Markdown into a bunch of static HTML files. It makes a site extremely fast for visitors because there’s no server-side scripting, database, etc. Gulp is an efficient JavaScript task runner.

I use Jekyll to power Beer Review. At some point I fired up generator-jekyllrb, a Yeoman generator, to compile Sass, minify code, and optimize images within a Grunt-based workflow. It served me well, but builds run slowly, and the setup has tasks that I don’t need. Plus, I’ve grown to prefer the code-over-configuration approach of Gulp.

Set Up

Let’s get ready. Here’s what you’ll need:

  • Ruby (already installed)
  • Jekyll (2.5.3 as of this writing): gem install jekyll
  • Libsass: brew install libsass
  • Node and NPM: brew install node
  • Gulp: npm install -g gulp

(Commands assume you’re on a Mac and have Homebrew installed.)

First, we need a Jekyll project to work on. Use an existing one or make a new one by running jekyll new [project_directory], replacing [project_directory] with your desired directory name.

Now, we need a starter gulpfile.js and package.json. I really enjoy the generator-gulp-webapp: it’s maintained by the Yeoman team, and it always has the latest and greatest stuff. We’ll borrow their files.

Here’s how to retrieve this generator’s files (alternatively, download them here):

  1. Install Yeoman: npm install -g yo
  2. Install the generator: npm install -g generator-gulp-webapp
  3. Make a new temporary directory and cd into it
  4. Use Yeoman to generate a new project: yo gulp-webapp
  5. Follow the prompts
  6. Stop Yeoman from installing dependencies

We’re ready to set up the Jekyll project directory. Move all Jekyll-associated files and directories into a new directory named app (to follow the Yeoman standard). A couple of exceptions: keep the configuration file (_config.yml), any READMEs, Gemfile, etc. in the project root. It should look something like this:


Now, move gulpfile.js and package.json from the temporary gulp-webapp directory into your Jekyll project directory. You can optionally move the dotfiles (.jshintrc, .editorconfig, etc.) if you want their functionality. The directory structure should now look like this:


Once everything’s in place, run npm install in the Jekyll directory to install the project’s dependencies.


First up is the project’s stylesheet (assuming Sass):

  1. Go into your root Sass file (app/css/main.scss in a default Jekyll install) and remove the front matter (everything between and including the ---s). The front matter is necessary when Jekyll compiles Sass, but it chokes a standalone Sass compile.
  2. Ensure the styles task in the gulpfile.js points to the right file:

    -  return gulp.src('app/styles/main.scss')
    +  return gulp.src('app/css/main.scss')
  3. Add the app/_sass directory to Sass’s includePaths:

    -       includePaths: ['.'],
    +       includePaths: ['.', 'app/_sass'],

Building Jekyll

Jekyll needs to know about the new directory structure. In keeping with Yeoman standards, the site’s code lives in app and the built, bundled code lives in dist. Jekyll doesn’t need to worry about images or styles, so let’s set the appropriate excludes. Add these to your _config.yml file:

source: "app"
destination: "dist"
exclude: ["img", "css", "_sass", "js"]
keep_files: ["img", "css", "js"]

We’ll need a way to fire up Jekyll through Gulp. gulp-shell works perfectly for this. Install it: npm install gulp-shell --save-dev. Then, add a jekyll task to the gulpfile.js:

gulp.task('jekyll', function () {
  return gulp.src('_config.yml')
    .pipe($.shell([
      'jekyll build --config <%= file.path %>'
    ]))
    .pipe(reload({stream: true}));
});
Wire up the new task in your gulpfile.js:

- gulp.task('html', ['styles'], function () {
+ gulp.task('html', ['styles', 'jekyll'], function () {

- gulp.task('serve', ['styles'], function () {
+ gulp.task('serve', ['styles', 'jekyll'], function () {

You’ll also want to change your gulp watches:

  gulp.watch([
-     'app/*.html',
-     'app/scripts/**/*.js',
+     'app/js/**/*.js',
-     'app/images/**/*',
+     'app/img/**/*',
-     '.tmp/fonts/**/*'
    ]).on('change', reload);

+ gulp.watch('app/**/*.{md,markdown,html}', ['jekyll']);

This will ensure BrowserSync reloads the server when your Jekyll files change.

Other Changes

Make sure your feed.xml isn’t overridden in the extras task:

  return gulp.src([
+   '!app/feed.xml',
  ], {

I also dropped the fonts task and any references to Bootstrap, which comes from the generator-gulp-webapp by default.

A note on Base URL

Beer Review operates from a base url (/beer-review/). Unfortunately, because of the new setup, Jekyll’s baseurl configuration can’t be relied upon. I found a workaround that involves string replacing. It isn’t the prettiest, but it works.

npm install gulp-replace --save-dev

Add this to the html task:

+   var baseurl = 'beer-review';
+   var htmlPattern = /(href|src)(=["|']?\/)([^\/])/gi;
+   var htmlReplacement = '$1$2' + baseurl + '/$3';
+   var cssPattern = /(url\(['|"]?\/)([^\/])/gi;
+   var cssReplacement = '$1' + baseurl + '/$2';

      .pipe($.if('*.html', $.minifyHtml({conditionals: true, loose: true})))
+     .pipe($.if('*.html', $.replace(htmlPattern, htmlReplacement)))
+     .pipe($.if('*.css', $.replace(cssPattern, cssReplacement)))

Replace baseurl with your desired path.
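To see what the HTML pattern actually does, here’s the same regex and replacement applied to a sample string (the paths are illustrative):

```typescript
// The pattern captures the attribute name, the opening quote-plus-slash,
// and the first path character, then re-inserts the base url between them.
const baseurl = 'beer-review';
const htmlPattern = /(href|src)(=["|']?\/)([^\/])/gi;
const htmlReplacement = '$1$2' + baseurl + '/$3';

const input = '<link href="/css/main.css"><img src="/img/logo.png">';
const output = input.replace(htmlPattern, htmlReplacement);
// output: '<link href="/beer-review/css/main.css"><img src="/beer-review/img/logo.png">'
```

Note that `[^\/]` as the final group is what keeps protocol-relative URLs like `//example.com/…` untouched, since their second slash fails the match.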

That’s that! See the complete gulpfile.js here. Happy building!

Moving To MRN

I’m very excited to announce that I’m joining the web team at the Mind Research Network as a software engineer. MRN is a non-profit focused on academic research into mental illness and brain disorders. They’re based in Albuquerque, New Mexico, but I shall remain in Portland as a remote employee.

These transitions are always tough. I’m leaving Electric Pulp, one of the oldest web firms in the country. I consider this last year’s projects to be some of my very best. I’m fortunate to have met and worked with such talented people.


Client work can be wearying. Constant context switching, rapid timescales, concerned clients…it’s a lot to juggle. There’s rarely a chance to revisit and improve old work. Agency front-end developers build up a very particular set of skills: proficiency typically results in less challenging work. I’ve done it for four years, and I’m ready to try something new.

What Now

MRN’s web team works on a researcher-facing web app that’s fairly large and, frankly, showing its age. The team plans to restructure the app and update the design. I’ll help by fixing bugs, testing code, and developing front-end features (thus the recent JavaScript reads). I’m especially thrilled to put my design experience to use and assist with user interface design. There’s much to do, and I’m excited to get started.

Jeremy Keith on the Style Guide Podcast

The Style Guide Podcast, hosted by Anna Debenham and Brad Frost, is quickly becoming one of my favorites. They interview designers from organizations and teams that utilize, predictably, a style guide in their work.

The most recent episode features Jeremy Keith of Clearleft. It’s a great episode because Jeremy is able to profoundly put into words the struggle of designing for the web in the present day. While I highly suggest listening to the entire episode (41 minutes), here’s part of the conversation that I found particularly provoking (starting at 11:55):

Anna: Do you tend to work straight in code, or do you kind of come up with mockups first and then get them built?

Jeremy: There’s nearly always mockups first. I guess the question is how long you spend refining those mockups…

There’s three reasons a mockup could exist. One is that it’s for buy-in. It’s basically for sign-off. You’re mocking something up to present it to a decision maker who then says thumbs up or thumbs down…

Use number two is it’s a deliverable for a front-end developer, so in other words it’s something a visual designer gives to a front-end developer to get turned into code. That’s a separate use case. Now here’s problem number one: mockups made for the first use case end up getting used for the second. So if you’re designing for sign-off everything’s perfect. Everything lines up at the top and the bottom…everything looks beautiful…Well, everyone’s just going to get disappointed, right? The developer’s going to get frustrated because it’s not accurate, the designer’s frustrated because, “hey, that doesn’t look like my mockup,” and the client is frustrated because that doesn’t look like what they were presented with…

And then there’s a third use of a mockup which is for a visual designer to think…they have a tool that they’re comfortable with, like Photoshop, Sketch, Illustrator, whatever, and the fastest way for them to get ideas down is to create a mockup.

I think that’s extremely insightful. Everyone agrees that a mockup isn’t the website, but this artifact has different purposes in the process. Jeremy continues:

They’re three very different use cases…[A]nd yet what happens is a single mockup will end up kind of doing all three…

So this is the problem…we found in general is that there’s this mismatch, I guess, of expectations. And this is why I love Dan’s [Mall] idea of deciding in the browser…Yeah, you do stuff in PhotoShop or Sketch or whatever you’re comfortable with. It’s more about that third use case, use whatever tool is comfortable with you. But you don’t go to the client and say approval or not approval until you’ve got it in web browsers until you’ve got it in code. Because that’s just so much more accurate to reality.

Wow. Jeremy puts into words the things that we struggle with so well. While even he admits that they’re still refining their process, I think it’s a step in the right direction. The approach sounds more true to the medium.