Redesigning the Netflix API

By , September 10, 2023 7:55 pm

This post originally appeared on the Netflix Tech Blog on February 8, 2011.

This is Daniel Jacobson, Director of Engineering for the API here at Netflix. The Netflix API launched in 2008 with a focus on the public developer community. The expectation was that this community would build amazing and inspiring applications that would take Netflix to a new level in serving our members. While some fantastic things were built by this community, including sites like Instant Watcher, the transformational moment for the API was when we started to use it to deliver the streaming functionality to Netflix ready devices. In the last year, we have escalated this approach to deliver Netflix metadata through the API to hundreds of devices. All of this has culminated in tremendous growth of the API, as represented by the following chart:

The tremendous growth of the API over the last year is due to a combination of the increased number of users, more activity by our users, Netflix’s steady adoption of new devices over time, as well as chattier interfaces to the API.

Growing the API by about 37× in 13 months indicates a few things to us. First, it demonstrates the tremendous success of the API and the fact that it has become a critical system within Netflix. Moreover, it suggests that, because it is so critical, we have to get it right. When reviewing the founding assumptions of the API from 2008, it is now clear to us that the API needs to be redesigned to carry us into the future.

Establishing New Goals

In the two-and-a-half years that the API has been live, some major, fundamental changes have taken place, both with the API and with Netflix. I already mentioned the change in focus from an exclusively public API to one that also drives our device experiences. Additionally, at the time of the launch, Netflix was primarily focused on delivering DVDs. Today, while DVDs are still part of our identity, the growth of our business is streaming. Moreover, we are no longer US-only. In October, we launched in Canada with a pure streaming plan and we are exploring other international markets as well. Because of these fundamental changes, as well as others that have cropped up along the way, the goals of the API have changed. And because the goals have changed, the way the API needs to operate has as well.

Decreasing Total Requests

An example of where the current design is inefficient is in the way the API resources are modeled. Today, there are about 20 resources in the API. Some of these resources are very similar to each other, although they each have their own interfaces and responses. Because of the number of resources and the fact that we are adhering very closely to the REST conventions, our devices need to make a series of calls to the APIs to get all the content needed to render the user interface. The result is that there is a high degree of chattiness between the devices and the APIs. In fact, one of our device implementations accounts for about 50% of the total API calls. That same device, however, is responsible for significantly less streaming traffic. Why is this device so chatty? Can we design our API to reduce the number of calls needed to create the same experience? In essence, assuming everything remains static, could the 20+ billion requests that we handled in January 2011 have been 15 billion? Or 10 billion?
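To make the chattiness concrete, here is a minimal client-side sketch contrasting the two models. The host, paths and resources are hypothetical, not Netflix's actual API; the point is simply that a screen assembled from several fine-grained REST resources costs several network round trips, while a coarse-grained endpoint can return the same content in one.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;

    public class ChattyVsCoarse {
        static final HttpClient http = HttpClient.newHttpClient();
        static final String base = "https://api.example.com"; // hypothetical host

        // Chatty: one fine-grained REST call per resource needed to render a screen (4 round trips).
        static void renderHomeScreenChatty() throws Exception {
            for (String path : List.of("/users/42", "/users/42/queue",
                                       "/users/42/recommendations", "/titles/70136120")) {
                get(path);
            }
        }

        // Coarse-grained: a single request returns everything the screen needs (1 round trip).
        static void renderHomeScreenCoarse() throws Exception {
            get("/screens/home?user=42");
        }

        static String get(String path) throws Exception {
            HttpRequest req = HttpRequest.newBuilder(URI.create(base + path)).build();
            return http.send(req, HttpResponse.BodyHandlers.ofString()).body();
        }
    }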

Decreasing Payload

If we reduce the number of requests to the API to achieve the same user experience, it implies that the payload of each request will need to be larger. While it is possible that this extra payload won’t noticeably impair performance, we still would like to reduce the total number of bits delivered. To do so, we will also be looking at ways to handle partial response through the API. Our goal in this approach will be to conceptualize the API as a database. A database can handle incredible variability in requests through SQL. We want the API to be able to answer questions with the same degree of variability that SQL can for a database. Other implementations, like YQL and OData, offer similar flexibility and we will research them as well. Chattiness and payload size (as well as their impact on the request/response model) are just two examples of the things we are researching in our upcoming API redesign. In the coming weeks, as we get deeper into this work, we will continue to post our thinking to this blog.
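As a rough illustration of the partial-response idea, the sketch below lets a caller name the fields it wants, much as a SQL SELECT clause does. The fields parameter, the field names and the in-memory document are all invented for illustration; a real implementation would sit in front of whatever representation the API already builds.

    import java.util.List;
    import java.util.Map;
    import java.util.Set;
    import java.util.stream.Collectors;

    public class PartialResponse {
        // The full representation the API could return for a title (hypothetical fields).
        static final Map<String, Object> title = Map.of(
            "id", 70136120, "title", "Example Movie", "synopsis", "A long synopsis",
            "boxArt", "https://example.com/boxart.jpg", "cast", List.of("Actor A", "Actor B"),
            "ratings", Map.of("average", 4.2));

        // Returns only the fields the caller asked for, e.g. ?fields=id,title,boxArt
        static Map<String, Object> select(String fieldsParam) {
            Set<String> wanted = Set.of(fieldsParam.split(","));
            return title.entrySet().stream()
                    .filter(e -> wanted.contains(e.getKey()))
                    .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
        }

        public static void main(String[] args) {
            // A constrained device asks for a small slice of the document and gets a smaller payload.
            System.out.println(select("id,title,boxArt"));
        }
    }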

If these challenges seem exciting to you, we are hiring! Check out the jobs on the API team at our jobs site.

Embracing the Differences : Inside the Netflix API Redesign

By , September 10, 2023 7:50 pm

This post originally appeared on the Netflix Tech Blog on July 9, 2012.

As I discussed in my recent blog post on ProgrammableWeb.com, Netflix has found substantial limitations in the traditional one-size-fits-all (OSFA) REST API approach. As a result, we have moved to a new, fully customizable API. The basis for our decision is that Netflix’s streaming service is available on more than 800 different device types, almost all of which receive their content from our private APIs. In our experience, we have realized that supporting these myriad device types with an OSFA API, while successful, is not optimal for the API team, the UI teams or Netflix streaming customers. And given that the key audiences for the API are a small group of known developers to which the API team is very close (i.e., mostly internal Netflix UI development teams), we have evolved our API into a platform for API development. Supporting this platform are a few key philosophies, each of which is instrumental in the design of our new system. These philosophies are as follows:

  • Embrace the Differences of the Devices
  • Separate Content Gathering from Content Formatting/Delivery
  • Redefine the Border Between “Client” and “Server”
  • Distribute Innovation

I will go into more detail below about each of these, including our implementation and what the benefits (and potential detriments) are of this approach. However, each philosophy reflects our top-level goal: to provide whatever is best for the Netflix customer. If we can improve the interaction between the API and our UIs, we have a better chance of making more of our customers happier.

Now, the philosophies…

Embrace the Differences of the Devices

The key driver for this redesigned API is the fact that there are a range of differences across the 800+ device types that we support. Most APIs (including the REST API that Netflix has been using since 2008) treat these devices the same, in a generic way, to make the server-side implementations more efficient. And there is good reason for this approach. Providing an OSFA API allows the API team to maintain a solid contract with a wide range of API consumers because the API team is setting the rules for everyone to follow.

While effective, the problem with the OSFA approach is that its emphasis is on making things convenient for the API provider, not the API consumer. Accordingly, OSFA ignores the differences between these devices, the very differences that would allow us to take fuller advantage of the rich features offered on each. To give you an idea of these differences, devices may differ on:

  • Memory capacity or processing power, which can affect how much content a device can manage at a given time
  • Requirements for distinct markup formats, a need that becomes more likely as device types proliferate
  • Document models, with some devices performing better with flatter models and others with more hierarchical ones
  • Screen real estate, which may affect which content elements are needed
  • Document delivery, with some devices performing better with bits streamed across HTTP rather than delivered as a complete document
  • User interactions, which can influence the metadata fields needed, the delivery method, the interaction model, etc.

Our new model is designed to cut against the OSFA paradigm and embrace the differences across devices while supporting those differences equally. To achieve this, our API development platform allows each UI team to create customized endpoints. So the request/response model can be optimized for each team’s UIs to account for unique or divergent device requirements. To support the variability in our request/response model, we need a different kind of architecture, which takes us to the next philosophy…
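As a sketch of what embracing those differences can look like in practice (the endpoint names and fields here are invented, not Netflix's real ones), two custom endpoints might shape the same gathered catalog data very differently: one flat and minimal for a memory-constrained device, one hierarchical and richer for a more capable client.

    import java.util.List;
    import java.util.Map;

    public class CustomEndpoints {
        record Video(int id, String title, String boxArtUrl) {}

        static final List<Video> popularRow = List.of(
                new Video(1, "Example Title A", "https://example.com/a.jpg"),
                new Video(2, "Example Title B", "https://example.com/b.jpg"));

        // Endpoint tailored to a memory-constrained device: flat list, terse field names, no art.
        static List<Map<String, Object>> constrainedDeviceHomeRow() {
            return popularRow.stream()
                    .map(v -> Map.<String, Object>of("id", v.id(), "t", v.title()))
                    .toList();
        }

        // Endpoint tailored to a richer client: hierarchical document with full metadata.
        static Map<String, Object> richClientHomeRow() {
            return Map.of("row", Map.of("title", "Popular on Netflix", "videos", popularRow));
        }
    }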

Separate Content Gathering from Content Formatting/Delivery

In many OSFA implementations, the API is the engine that retrieves the content from the source(s), prepares that payload, and then ultimately delivers it. Historically, this implementation is also how the Netflix REST API has operated, which is loosely represented by the following image:

The above diagram shows a rainbow of colors roughly representing some of the different requests needed for the PS3, as an example, to start the Netflix experience. Other UIs will have a similar set of interactions against the OSFA REST API given that they are all required by the API to adhere to roughly the same set of rules. Inside the REST API is the engine that performs the gathering, preparation and delivery of the content (indifferent to which UI made the request).

Our new API has departed from the OSFA API model towards one that enables fine-grained customizations without compromising overall system manageability. To achieve this model, our new architecture clearly separates the operations of content gathering from content formatting and delivery. The following diagram represents this modified architecture:

In this new model, the UIs make a single request to a custom endpoint that is designed to specifically handle that request. Behind the endpoint is a handler that parses the request and calls the Java API, which gathers the content by calling back to a range of dependent services. We will discuss in later posts how we do this, particularly in how we parse the requests, trigger calls to dependencies, handle concurrency, support fallbacks, as well as other techniques we use to ensure optimized and accurate gathering of the content. For now, though, I will just say that the content gathering from the Java API is generic and independent of destination, just like the OSFA approach.
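As a rough, hypothetical sketch of that general pattern (the service names and fallback values are invented, and the real system does much more), concurrent gathering with per-dependency fallbacks could look something like this:

    import java.util.List;
    import java.util.concurrent.CompletableFuture;

    public class ContentGathering {
        // Each dependent service is called concurrently; a failure degrades to a fallback
        // rather than failing the whole request.
        static CompletableFuture<List<Object>> gatherHomeScreenContent(String userId) {
            CompletableFuture<Object> user = callService("user-service", userId)
                    .exceptionally(t -> "anonymous-profile-fallback");
            CompletableFuture<Object> queue = callService("queue-service", userId)
                    .exceptionally(t -> List.of());              // empty queue on failure
            CompletableFuture<Object> recs = callService("recommendation-service", userId)
                    .exceptionally(t -> List.of("generic-row")); // generic row on failure

            return CompletableFuture.allOf(user, queue, recs)
                    .thenApply(done -> List.of(user.join(), queue.join(), recs.join()));
        }

        // Stand-in for a real dependency call (hypothetical).
        static CompletableFuture<Object> callService(String name, String userId) {
            return CompletableFuture.supplyAsync(() -> name + " response for " + userId);
        }
    }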

After the content has been gathered, however, it is handed off to the formatting and delivery engines which sit on top of the Java API on the server. The diagram represents this layer by showing an array of different devices resting on top of the Java API, each of which corresponds to the custom endpoints for a given UI and/or set of devices. The custom endpoints, as mentioned earlier, support optimized request/response handling for that device, which takes us to the next philosophy…

Redefine the Border Between “Client” and “Server”

The traditional definition of “client code” is all code that lives on a given device or UI. “Server code” is typically defined as the code that resides on the server. The divide between the two is the network border. This is often the case for REST APIs and that border is where the contract between the API provider and API consumer is engaged, as was the case for Netflix’s REST API, as shown below:

In our new approach, we are pushing this border back to the server, and with it goes a substantial portion of the UI-specific content processing. All of the code on the device is still considered client code, but some client code now resides on the server. In essence, the client code on the device makes a network call back to a dedicated client adapter that resides on the server behind the custom endpoint. Once back on the server, the adapter (currently written in Groovy) explodes that request out to a series of server-side calls that get the corresponding content (in some cases, roughly the same rainbow of requests that would be handled across HTTP in our old REST API). At that point, the Java APIs perform their content gathering functions and deliver the requested content back to the adapter. Once the adapter has some or all of its content, the adapter processes it for delivery, which includes pruning out unwanted fields, error handling and retries, formatting the response, and delivering the document header and body. All of this processing is custom to the specific UI. This new definition of client/server is represented in the following diagram:
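To make that flow a little more tangible, here is a simplified, hypothetical adapter. The real adapters are written in Groovy and do far more (retries, error handling, progressive delivery); this Java sketch only shows the explode-gather-prune-format shape of the work.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.CompletableFuture;

    // A device-specific adapter that lives on the server behind a custom endpoint.
    public class Ps3HomeScreenAdapter {

        Map<String, Object> handle(Map<String, String> request) {
            String userId = request.get("userId");

            // 1. Explode the single device request into several server-side calls.
            CompletableFuture<Map<String, Object>> user = javaApi("user", userId);
            CompletableFuture<Map<String, Object>> rows = javaApi("homeRows", userId);

            // 2. Wait for the gathered content (retries and error handling omitted here).
            Map<String, Object> userDoc = user.join();
            Map<String, Object> rowsDoc = rows.join();

            // 3. Prune unwanted fields and format the document for this one UI.
            return Map.of(
                    "greeting", userDoc.getOrDefault("firstName", "there"),
                    "rows", rowsDoc.getOrDefault("rows", List.of()));
        }

        // Stand-in for the generic, destination-independent content-gathering API (hypothetical signature).
        CompletableFuture<Map<String, Object>> javaApi(String resource, String userId) {
            return CompletableFuture.completedFuture(Map.of());
        }
    }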

There are two major aspects to this change. First, it allows for more efficient interactions between the device and the server since most calls that otherwise would be going across the network can be handled on the server. Of course, network calls are the most expensive part of the transaction, so reducing the number of network requests improves performance, in some cases by several seconds. The second key component leads us to the final (and perhaps most important) philosophy to this approach, which is the distribution of the work for building out the optimized adapters.

Distribute Innovation

One expected critique with this approach is that as we add more devices and build more UIs for A/B and multivariate tests, there will undoubtedly be myriad adapters needed to support all of these distinct request profiles. How can we innovate rapidly and support such a diverse (and growing) set of interactions? It is critical for us to support the custom adapters, but it is equally important for us to maintain a high rate of innovation across these UIs and devices.

As described above, pushing some of the client code back to the servers and providing custom endpoints gives us the opportunity to distribute the API development to the UI teams. We are able to do this because the consumers of this private API are the Netflix UI and device teams. Given that the UI teams can create and modify their own adapter code (potentially without any intervention or involvement from the API team), they can be much more nimble in their development. In other words, as long as the content is available in the Java API, the UI teams can change the code that lives on the device to support the user experience and at the same time change the adapter code to deliver the payload needed for that experience. They are no longer bound by server teams dictating the rules and/or being a bottleneck for their development. API innovation is now in the hands of the UI teams! Moreover, because these adapters are isolated from each other, this approach also diminishes the risk of harming other device implementations with tactical changes in their device-specific APIs.

Of course, one drawback to this is that UI teams are often more skilled in technologies like HTML5, CSS3, JavaScript, etc. In this system, they now need to learn server-side technologies and techniques. So far, however, this has been a relatively small issue, especially since our engineering culture is to hire very strong, senior-level engineers who are adaptable, curious and passionate about learning and implementing these kinds of solutions. Another concern is that because the UI teams are implementing server-side adapters, they have the potential to bring down the servers through infinite loops or other processes that are resource intensive. To offset this, we are working on scrubbing engines that will hopefully minimize the likelihood of such mistakes. That said, in the OSFA world, code on the device can just as easily DDOS the server, it is just potentially a bigger problem if it runs on the server.

Example of how this new system works:

  1. A device, such as the PS3, makes a single request across the network to load the home screen (PS3 UI team)
  2. A Groovy adapter receives and parses the PS3 request (PS3 UI team)
  3. The adapter explodes that one request into many requests that call the Java API (PS3 UI team)
  4. Each Java API calls back to a dependent service, concurrently when appropriate, to gather the content needed for that sub-request (API team)
  5. In the Java API, if a dependent service is unavailable or returns a 4xx or 5xx, the Java API returns a fallback and/or an error code to the adapter (API team)
  6. Successful Java API transactions then return the content back to the adapter when each thread has completed (API team)
  7. The adapter can handle the responses from each thread progressively or all together, depending on how the UI team wants to handle it (PS3 UI team)
  8. The adapter then manipulates the content, retrieves the wanted (and prunes out the unwanted) elements, handles errors, etc. (PS3 UI team)
  9. The adapter formats the response in preparation for delivery back across the network to the PS3, which includes everything needed for the PS3 home screen in the single payload (PS3 UI team)
  10. The adapter finally handles the delivery of the payload across the network (PS3 UI team)
  11. The device will then parse this optimized response and populate the UI (PS3 UI team)

We are still in the early stages of this new system. Some of our devices have fully migrated over to it, others are split between it and the REST API, and others are just getting their feet wet. In upcoming posts, we will share more about the deeper technical aspects of the system, including the way we handle concurrency, how we manage the adapters, the interaction between the adapters and the Java API, our Groovy implementation, error handling, etc. We will also continue to share the evolution of this system as we learn more about it.

How to Make Money With Your API

By , September 5, 2023 9:18 pm

This post (originally appearing in ProgrammableWeb.com, which is now defunct) comes from Daniel Jacobson, Director of Application Development for NPR. Daniel leads NPR’s content management solutions, is the creator of the NPR API and is a frequent contributor to the Inside NPR.org blog.

One of the questions that I am most frequently asked regarding content APIs is “how can I make money with my API?” Before answering that question, however, it is important to ask for whom the API is designed. After all, the audiences for your API will determine what business opportunities exist.

The most common target audience for APIs is the developer community. While that audience is an interesting and potentially important one, it is not where the greatest value can be realized.

When we launched the NPR API in 2008, we established four target audiences, each of which was important. The target audiences were (and still are):

  • NPR: NPR itself is the most important audience because, as we build all of our systems, mobile apps, etc., we need to be as nimble and efficient as possible. We have adopted this so deeply that the API is the foundation of everything that we do, including acting as the content source for NPR.org.
  • NPR member stations: NPR member stations are a critical aspect of the NPR mission and business model. Offering the stations a new, more robust and effective way to get NPR content better serves the stations and their communities, as well as NPR.
  • NPR partners: The API quickly became a more effective way to interact with content aggregators, business partners and other commercial entities with whom we have established relationships. In fact, the API became a business development tool, with some external organizations approaching us because we had a robust API.
  • The general public: Finally, as part of our public service mission, it was and is important for NPR to share our content with the world, and exposing it to the developer community is a natural extension of this effort. When we launched the API, we fully expected this audience to be where the true innovation took place. In fact, the day after our launch, I told CNet that the developer community “will come up with a lot of brilliant ideas.”

With the API live for a full two years, I decided to look more closely at how effectively the API has been serving these four audiences. Although I am not surprised by the results, you may be…

The following charts show the distribution of how many API keys are registered by each of our four audiences. That metric is then compared to the consumption of the API (as measured by API requests) by the four audiences:

Obviously, there are many more API keys registered to the general public than the others. In fact, our API currently has over 10 times more public keys than all other keys in the system combined.

Despite the disparity between public keys and those used by other audiences, the dominant group from a request perspective is overwhelmingly NPR, responsible for more than 92% of the total number of requests. That means that the remaining 8% of requests are coming from all three other target audiences combined.

When considering this distribution in requests by audience relative to the key distribution by audience, it is clear that NPR has by far been the most effective user of the API. So, given the incredible amount of consumption by NPR, how has that translated into revenue opportunities? Below is a chart detailing the growth in total page views across all NPR platforms over a twelve-month span:

By the end of the twelve months, NPR’s total page views had increased by more than 100%. How were we able to add that many page views in such a short amount of time? The API. Not directly, but the API did enable NPR product owners to quickly, efficiently and independently build specialized apps on various new platforms. As a result, what we have seen is primarily additive growth. In other words, in addition to NPR.org’s own growth (about 19%), we have been able to add the NPR News iPhone app, the improved mobile site, the Android app, the iPad app, etc., each of which adds page views. From our analysis, adding these new platforms is generating new traffic and is not cannibalizing page views from NPR.org in a substantive way. These new page views create new sponsorship/advertising inventory, which in turn creates new revenue opportunities.

So, when asked the question “how can I make money with my content API?”, the answer should always be based on your target audiences. And from NPR’s experience, the best way to make money is to focus on how the API can improve your internal processes. Of course, it is still important to maintain a solid support and growth model for the other audiences as well, but we cannot all be Google, Netflix, Twitter, etc. Unless you are planning to spend a lot of money on community engagement, you are better served by making sure you can liberate your product owners and grow your business more quickly, efficiently and independently.

In other words, don’t assume that the API’s primary audience is the developer community. Question that default position and do the introspection that will enable you to get the maximum value out of your API.

Engineering spirals: 10 philosophies to facilitate innovation

By , September 5, 2023 9:16 am

This article was first published on The Next Web on March 25, 2014

Daniel Jacobson (LinkedIn) is the VP of Edge Engineering for the Netflix API. Prior to Netflix, Daniel ran application development for NPR where, among other things, he created the NPR API. He is also the co-author of APIs: A Strategy Guide.

“Get busy living, or get busy dying” – Shawshank Redemption

Building great engineering teams is difficult, but it is also increasingly important as the world in which we live is more than ever driven by software. Because of this growing importance, it is essential for engineering leaders to maintain a culture of innovation within their teams to ensure high performance and to keep the company ahead of the curve.

In high-performance cultures like Netflix’s, there are basically two outcomes that will play out over time for engineering teams. Either the team will enjoy an upward spiral established by a strong culture of innovation, or it will spiral in the downward direction, resulting in an inevitable decay of the team and its products.

Here are my experiences as an engineering leader and how I’ve worked to build a culture around innovation for my teams, virtually at all costs.

The downward spiral

For most engineering teams, it is easy to enter a steady state of development and maintenance as systems get off the ground and mature.

Accordingly, managers often slow or halt hiring as the amount of work is relatively well-understood. As a result, the engineers on the team enter a daily or weekly (or perhaps monthly) ritual of incremental improvements, responding to requests, and fixing bugs.

As engineers churn through task lists, however, they become bored, uninspired, and complacent, resulting in degradation in velocity and/or quality. That degradation will result in more churn around testing and/or support issues, which will further frustrate and bore the engineers while generating more potential for system failures that will increase the churn.

The more churn, the more turnover in staff; the more turnover in staff, the more additional churn. This downward spiral can play out very quickly or it can take quite a while.

In either case, there is a clear direction, it is inevitable, and it has a bad ending.

Upward spiral

The way out of the downward spiral is to make some very difficult decisions that have short-term ramifications for the benefit of the long term. I call this “taking your lumps.”

If you take your lumps now by deferring non-essential work, it frees the team up to think about the long-term and to seek patterns in their work, systems, and operations. Through these patterns, the team can potentially program away a class of work that otherwise would occupy the team’s time on an ongoing basis.

Eliminating a class of work enables the team to have more available time in the future to seek other such patterns or opportunities, which will create even more available time.

With the available time, not only is the team further alleviated from the daily churn of reacting to external needs, they are also able to pursue higher order projects that allow the team to make transformative leaps forward rather than churning to keep up or making minor incremental improvements.


Repeated enough, this will eventually become part of the team’s culture, resulting in higher quality work and greater velocity. Unlike the downward spiral, there will be positivity around the team that will be infectious and will create a breeding ground for attracting new talent.

Virtually every engineering team will find itself in one of the two aforementioned trajectories. It might not be obvious which way things are headed, but there will be a trend one way or the other.

It is the job of the engineering leader to ensure that the spiral is upward. Here are my 10 philosophies and approaches that I employ with my teams to strive for the upward trajectory:

1. Establish a strong identity

Be very clear on the identity of the team and establish a set of philosophies against which the team can operate. Be stubborn about adhering to the identity. The more that identity gets compromised by one-off requests, the more the architecture weakens, the more churn the team will have to deal with, and the more likely morale will suffer.

Be clear on what you will and won’t do and make sure the team knows these boundaries, lives them, and communicates them to others.

2. Important vs. Urgent

In “The 7 Habits of Highly Effective People,” Stephen R. Covey talks about the difference between urgent and important. Engineering organizations can very easily fall into the trap of being highly reactionary to externally imposed requests.

While many of these externally imposed requests are very important (and even when they are not), they tend to command the team’s attention as both urgent and important. But there are many other tasks or efforts that are very important despite the fact that they are internally driven and elective.

Understanding this distinction and being able to distinguish which tasks fall into which category is paramount in getting out of the churn and enabling that first critical step: introspection.

3. Introspection

Introspection is the key to innovation. Handling requests from a range of external (or even internal) stakeholders is the natural, easy thing for a team to do. Taking a step back from those requests and looking for patterns across them while imagining what they might look like in the future will give a broader and more impactful perspective.

If the system gets refactored in some other way, will that eliminate a class of requests in the future?  Given how the industry is evolving, can you anticipate weaknesses in the system’s architecture that should be examined now? These are examples of important questions that can help springboard your team out of their everyday churn of satisfying urgent requests.

4. Don’t throw good money at bad

During the introspection process, it is important to be future-oriented. Your team has a lot of functioning code and other system-oriented assets which should be considered.

That said, they should only be considered after evaluating the long-term needs of the team and its relationship to its constituents. Imagine starting from scratch and target that as your outcome. From there, it is much easier to see how, if at all, existing assets can play a role in that future state (or in the transition to get there).

5. Hire beyond your needs


The most important resource to enable introspection is time. Many companies and hiring managers work towards “right-sizing” their teams. That is, they project what the incoming requests will be for the team and attempt to staff the team based on those expectations.

This is perhaps the biggest mistake that a manager can make when building and operating an innovation team, because it will ultimately limit the amount of available time for introspection.

Instead, hiring managers should staff beyond the bandwidth needed for known tasks. This will give the team the ability to swell and contract its focus on such work while continually maintaining a reasonable amount of time towards introspection and innovation.

6. Great engineers NEED to be challenged

If staffing is such that your great engineers are spending the majority of their time handling very tactical work, they will slowly but surely lose interest in the job and eventually leave.

Of course, doing that kind of work is a necessary part of every engineering job, but there needs to be a balance for great engineers to remain happy and excited about their work. Engineers need to also have deep architectural challenges that allow them to think, to stretch their minds, and to have a greater value to the company than just keeping the lights on.

In fact, most of them want to have the freedom to identify and pursue these challenges in a way that helps them feel empowered and impactful. That is why engineers get into this field in the first place, and if those opportunities are not available in their current job for too long, they will find them elsewhere.

7. Instill a culture of (good) laziness

There are two kinds of “lazy” in engineering: bad laziness and good laziness. Bad laziness is allowing yourself to repeat the same tasks over and over because that is easier than stepping back, looking for patterns, and spending the up-front time to program those tasks away. Manual deployment pipelines or manual tests are great examples. But ultimately, if a human can do it, a computer can (and should) do it too.

This is where good laziness comes in. Great engineers will ultimately get fed up with the arduous nature of a repeated task and seek to eliminate that work from their docket.

8. Innovation breeds innovation

Once an initial innovation occurs that liberates the team from some encumbering set of repeated tasks, the team now has some newly available time. That time can be used in any number of ways, but to maximize its utility the team should use that time for even more introspection which paves the way for the upward spiral.

The more such innovations that the team can yield, the more likely the team can yield more innovations. This is the case, not only because of the growth in available time, but also because it eventually becomes part of the team’s culture.

9. Don’t treat your systems like your baby

Many people in the engineering world grow very attached to the systems that they build. It is easy to establish that loyalty as engineers spend a lot of time working on a specific system. In fact, I have often heard people call their systems their baby (I may have been guilty of that in my past as well).

There is value in growing so attached to a system, in that it strengthens the bond and builds pride for the team as they strive for excellence with that system. That said, there is a long-term detriment to this as well.

Systems, like virtually any piece of technology, have a limited shelf life. At some point, the system will hit its limit and will need to be overhauled or replaced.

Loyalty to that system clouds one’s objectivity about what is best. We need to be able to treat our solutions as tactics towards a broader goal and if the tactic is no longer effective we need to abandon it.

10. There’s no such thing as maintenance mode


If a system is to go into maintenance mode, it really means one of two things: It is either not an important system anymore (which begs the question as to whether or not it should just be retired outright) or the business function is still important to the company even though the company no longer wants to invest in the system that supports it.

As part of the team’s culture, it is important to aspire to eliminate the idea of maintenance mode from the team’s vernacular.

Maintenance mode has two main detriments. First, it adversely affects the team’s morale and goes against the spirit of great engineers, which is to constantly be challenged. Second, most maintenance systems conflate the idea of supporting a legacy system with supporting its business function.

In fact, the latter is the real goal, and an innovative team will seek ways to retire legacy systems in favor of future-oriented systems that still support the required business function. This is not always easy or feasible, but you should always be seeking opportunities to move on from the legacy system.  Sometimes executing on that migration work is of equal or greater value than pursuing new innovations.

External risks

Ultimately, all of these principles depend on having excellent talent on the team. No amount of leadership can offset the challenges introduced by having the wrong skills or people.

Another risk is that many engineers like to chase the shiny new objects. A balance needs to be maintained between enabling great engineers to experiment, innovate, and identify and pursue challenges, and keeping their propensity to play with emerging technologies in check.

It is also worth noting that there are often external forces that prevent some organizations and/or leaders from achieving the above philosophies. For example, not all companies have enough available resources to staff beyond the needs or they may have a legacy of disparate and unrelated technologies that make it inherently more difficult to find a path out of the churn.

As a result, these philosophies require a strong company-level culture that puts leaders and teams in a position to achieve greatness. If the culture is there, however, these 10 philosophies, if truly embraced, will help springboard your team to being innovative and non-reactionary.

Review of Rush’s 40th Anniversary Show – San Jose

By , July 27, 2015 1:16 pm

Rush in San Jose

As my musical tastes have evolved over the years, one group has pervaded throughout: Rush.  My interest in them grew in parallel with my drumming, at least in my formative drumming years, and that attachment has persisted.  That said, as much as I love Rush, it has been a long time (the Roll the Bones tour in 1991) since my last Rush show.  In fact, I didn’t really anticipate seeing them ever again, mainly because I tend to see live music at smaller, more intimate venues, but also because I just haven’t connected with anything they have done in the past 20 years (since Counterparts).  But this tour felt different to me.  This one, I needed to go to.  The Rush 40th Anniversary Tour is likely their last and one that celebrates an incredible run by this quietly influential band.  Moreover, their concert was going to cycle backwards through their copious collection of songs (roughly 170 in total, depending on how you count some of them), touching on Rush-nerd classics such as Natural Science and Jacob’s Ladder.  Again, this felt like a must-attend event.

Show Duration

First of all, I do want to highlight the fact that the show had no opening band and lasted over three hours from start to finish.  Not bad for a trio of 60+ year olds, huh?

The show was scheduled to start at 7:30pm.  It actually kicked off at 7:45pm.  There was a fever pitch in the arena starting at about 7:20pm, lasting all the way until the opening video started off the show.  Any time the lights dimmed a little during the time leading up to the start of the show, the crowd noise would swell with anticipation, only to settle back down until the next false alarm.

Once they started, the first set lasted a little more than an hour, during which they played 10 songs, spanning the albums Clockwork Angels through Signals.  The band then took a “short break to rejuvenate” lasting about 20 minutes or so, after which they played another 16 songs (keep in mind that some of these songs lasted 10 or more minutes).  The show closed at 10:50pm.

Stage Design

 

Simple Stage Design for R40

Unlike previous Rush concerts that I have been to, this stage design was super clean and simple.  There were no big hats with rabbits in them, for example, bouncing around the stage at various points in time.  Most of the stage floor was open and flat, with just a big yellow R40 in the middle near the front.  There were some simple stage effects (e.g. popcorn machine, laundry machines, fake Marshall stacks, etc.) to either side of the drum set that were bordering on goofy, but I found them largely unoffensive.  It was also a bit entertaining to see the guys in orange R40 suits walking across the stage in the open to shift around those effects during the songs.  I also appreciated the apparent obliviousness of the band members to these moments, which were clearly designed to be there.

The biggest offense in the stage props came in the form of two crew members dressed in a horse suit who then marched across the stage holding a sign that read “Hey”, begging the audience to cheer the word in those empty spaces towards the end of the 2112 Overture.  I mean, seriously guys, this is your 40th anniversary tour.  The place is filled with Rush geeks who know what you expect.  A goofy horse is not necessary…

Behind the drum kit was a large screen flanked by two tall and narrow screens.  These screens were used to present a range of videos between and during songs.  More on this later, but I think it was classy, out of the way of the actual performance, and generally enhanced the show.

Energy

This was the show of two sets.  In the first set, I will sum it up by saying that I felt like Geddy was thrilled to be there, Neil was thrilled to just be playing the drums, and Alex was just thrilled to be alive.

To be more specific, Geddy was bouncing around the stage, deftly managing his significant workload while occasionally doing his awkward duck walk.  His enthusiasm definitely could not have been mistaken.  Meanwhile, Neil was behind the kit with a typical stoic expression.  While he played pretty proficiently throughout the night, he just seemed hunched over the whole time like this tour was weighing him down.  I got the sense that he would be happier just playing the drums back in their garage in Toronto.  Alex, well, he was very stationary for the first set and there was little evidence that he was even awake.  It really did feel like a two-person band for most of the first set.  This is particularly surprising given that Alex is the youngest of the three – he should have the most energy.

Alex looking lifeless
(courtesy of Jeff Butsch)

Between sets, however, they took a 20-minute break.  I have no idea what they did back stage, but Alex was a different player afterwards.  Maybe he got an injection or perhaps he was uninspired by the more recent material (similar to the sentiment of the majority of the crowd, I presume)…  Whatever it was, the second set put the “life” back into Lifeson.  No change in energy or performance for the other guys, for the most part.

Neil

Neil definitely showed his age.  Don’t get me wrong, he is still a great rock drummer and he generally played well, showing the poise of someone who has nearly 50 years experience on his instrument.  I have two major critiques of him in this show though.  First, and more fundamentally, he continues to play most of the parts as he crafted them on the albums.  Granted, the parts are well-constructed.  But he has the ability and creativity to improvise or vary things up more.  It would be great to see the fills in Tom Sawyer and YYZ, for example, played differently than their first incarnation nearly 35 years ago.  The second issue is that he simply made a bunch of mistakes during the show.  I don’t mean mistakes like he played the wrong part.  I mean, he would occasionally stumble and miss the one.  The most obvious occasion of such a misstep was in the triplet-based fill leading into the guitar solo of Tom Sawyer – he must have played this a million times correctly in the past, yet this was a big flub.  Geddy and Alex played on and Neil caught up again, so no real harm.  I note this because it is really uncharacteristic of an otherwise very precise player repeating the parts he has always played.  To be clear, this didn’t really adversely affect my enjoyment of the show.  I just noted it as an oddity, one that is likely a result of an aging player who has been touring on and off for 40+ years.

Neil played a mini-solo in the first set.  He played a larger one in the second set which split the uprights between the beginning and the end of Cygnus X-1 (with the middle of the song getting excised completely).  I don’t have much to say about the solos aside from the fact that he played them well and that I am a little disappointed that the majority of both solos consist of parts that I have heard many times before.  There were definitely nuggets in both that were exceptional though.

Instruments

Neil and his drum set
(courtesy of Jeff Butsch)

I loved Neil’s drum kit choices!  He ditched the circular drum set with all of the extraneous pieces in favor of his older and more traditional configurations.  Of course, the different eras have different percussive needs, so in exchange for the circulating kit Neil had his crew swap out kits during breaks so he can have the appropriate apparatus based on where the set was in the 40-year history.  For example, the kit he played after the big break included the chimes and bells for Xanadu and Closer to the Heart.  It was refreshing to see him get back to some of the roots and to match the simplicity of the stage design with the simplicity in his drum sets (with “simplicity in his drum sets” being a relative phrase).

Geddy with four instruments: vocals, keys, bass, guitar
(courtesy of Jeff Butsch)

The other really noteworthy detail about the instruments is that both Geddy and Alex pulled out their double-neck guitars for Xanadu.  Those things look great!

Song Choice

First, coming up with a set list for these shows must have been a very difficult task: taking roughly 170 songs and boiling them down to a set that would cover about two hours and forty-five minutes’ worth of material.  There are the obvious songs that you would expect, like Spirit of Radio, Closer to the Heart, Tom Sawyer, and Subdivisions.  But I was thrilled that they pulled out some others that cater to the more committed fan, like 2112 and part of Hemispheres.  Songs that surprised me that they didn’t play include Limelight, Show Don’t Tell, and The Trees; I would have gladly forsaken any of these in favor of La Villa Strangiato, Vital Signs, and By-Tor.  All in all, a reasonable set list given the concept they were shooting for.  That said, if I were them, I probably would have trimmed the bookends and had the entire set be material from between Fly By Night and Roll the Bones, giving them more time to fill out the set with their stronger material.  Again, just my opinion.

Video Interludes

There were a bunch of videos that they played throughout the show.  Overall, I found them to be a lot of fun and a good break from the music.  It was great to see the cameos from a range of comedians like Paul Rudd, Jason Segel, Eugene Levy and Jerry Stiller.  I like how they didn’t take themselves too seriously during the interludes – they were able to laugh at themselves a bit.  Good stuff.  The videos during the music were also interesting at times, but I found myself not paying attention to them during the actual performance.  Maybe this is because I was sitting off to the side more.


All in all, this was a great show.  With the ticket prices what they were, I feel like I did my part in sending this under-appreciated band out to pasture… but I don’t feel bad about spending that money at all.  The material is as great as ever and they played it well and with dignity.

Thanks for all the memories, Rush!

 

The future of API design: The orchestration layer

By , January 18, 2014 9:24 pm

The digital world is expanding at an amazing rate, giving us access to applications and content on myriad connected devices in our homes, offices, cars, pockets and even on our bodies. The glue that allows all of this to happen, that connects the companies who provide these services to the devices that you use, is the API.

Because APIs have such a huge responsibility for so many people and companies, it is natural that API design is often one of the industry’s liveliest discussions, touching on a range of topics including resource modeling, payload format, how to version the system, and security.

While these are likely important areas to explore when designing virtually any API, the reality is that a much larger decision needs to be made first. That decision is based on a fundamental question: who are the primary audiences for this API and how can we optimize for those audiences?

This seems like an innocuous enough question, but don’t underestimate its importance or complexity in the growing world of APIs.

Years ago, this question was much simpler

At that time, many emerging APIs were being built as open or public APIs, targeting a large set of unknown developers (LSUDs) external to the providing company.

Because of the (hopefully) vast numbers of external developers using the API representing different use cases, the most sensible way to design the API for this audience is to have the providing API team design it in a very clean, concise, and resource-oriented way that closely represents the data model and/or features of its source(s).

In a previous post, I referred to these as OSFA APIs (or one-size-fits-all APIs). Allowing for such granularity in the modeling means that any developer who wishes to use the API can mix and match the elements in whatever way they choose to satisfy their application without further API team involvement.

The resource-model approach to designing an API can be very powerful, especially for this type of audience. The problem with this approach, however, is that the way that many companies use APIs today is different than described above. While many are still supporting the use cases of LSUDs, more are using their APIs to support a growing mobile or device strategy.

For some of these implementations, the engagement with the developers is different. The audience is a small set of known developers (SSKDs). They may be engineers down the hall from the API team, a contracted company hired to develop an iPhone app, or an engineering team in a partnering company. In all of these cases, however, the API team knows who these people are (at least in the abstract sense).

More importantly, however, the API team and the providing company care about the success of these implementations in a different way than they might care about the applications developed by the LSUDs. In fact, the success of the SSKDs may very well be paramount to the success of the business as a whole, a model that is becoming increasingly more pervasive.

Because of this change in audience and the deep interest in their success, there is great opportunity to change the API design.

For the SSKDs, having granular resource-based APIs that closely represent the data model works, but it just isn’t as optimal as it could be. This is especially the case when you consider the growing number of device types in the world and the fact that more and more companies’ business strategies are dependent on providing value to customers on such devices.

So, all it takes is a couple of devices with diverging needs and/or capabilities, each of great import to the company, for the resource-based API to start to show some warts. Making the API better, more optimized, for each of these target applications is the next logical, and most critical, step.

Enter the Orchestration Layer

An API Orchestration Layer (OL) is an abstraction layer that takes generically-modeled data elements and/or features and prepares them in a more specific way for a targeted developer or application.

To address this opportunity, more companies are employing orchestration layers into their API infrastructure. While there are many ways in which to implement this architectural construct, the concept remains the same across all of them.

Below, I will describe a few of the more common patterns that I have seen (and/or been involved in implementing). But first, here are a few key principles that need to be considered when building an OL:

1. Most APIs are designed by the API provider with the goal of maintaining data model purity. When building an OL, be prepared to sometimes abandon purity in favor of optimizations and/or performance.

2. Many APIs are designed by API teams to make it easier for the API team to support. When building an OL, be prepared to potentially add complexity for the API team (or other teams, depending on the way it is implemented).

While this sounds undesirable, the goal here is to dramatically improve efficiency and/or simplicity for other people at some mild cost to the API team. Also keep in mind that such costs can potentially be programmed away over time.

3. It is important to understand the breadth of the audiences for the API. Depending on those constituents, you may only need the OL. In other cases, you may need the OSFA foundation in addition to the OL.

Here are a few examples of how some OLs have been approached:

Device-specific wrappers

This is the most common pattern that I have seen because most companies that are experiencing the distress referenced above already have APIs that they still use, continue to support, and invest in. The result is to continue to offer the granular resources as they always have, but to offer a wrapper tier on top of them – with new endpoints that are tailored to specific developers, devices or device clusters.

In this model, the API team will work more closely with, for example, the iPhone team to write a custom wrapper that handles specific requests and delivers specific payloads that are optimized for the iPhone app. In this model, the team that builds the endpoints and the wrappers is most often the API team, although that doesn’t have to be the case.
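A minimal sketch of this pattern, with hypothetical hosts and paths: the granular resources stay exactly as they are, and the wrapper fans out to them server-side, returning one payload shaped for a single app.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // A device-specific wrapper endpoint layered on top of existing granular resources.
    public class IphoneHomeWrapper {
        static final HttpClient http = HttpClient.newHttpClient();
        static final String base = "https://internal-api.example.com"; // hypothetical

        // Handles GET /iphone/home?user={id} for the iPhone app.
        static String handle(String userId) throws Exception {
            // Fan out to the unchanged, granular OSFA resources...
            String profile = get("/users/" + userId);
            String queue = get("/users/" + userId + "/queue");
            // ...and return a single payload shaped for this one client.
            return "{\"profile\":" + profile + ",\"queue\":" + queue + "}";
        }

        static String get(String path) throws Exception {
            HttpRequest req = HttpRequest.newBuilder(URI.create(base + path)).build();
            return http.send(req, HttpResponse.BodyHandlers.ofString()).body();
        }
    }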

Query-based APIs

In this model, the API team is putting the power in the hands of the requesting developer, although that power is limited. The goal here is to create a more flexible way in which the requester can make requests and tailor payloads without putting additional ongoing burden on the API team, as could be the case with the Device-Specific Wrappers.

This is achieved by breaking down the resource-based APIs and allowing them to be queried against like a database through flexible parameters and payloads that can contract, expand and possibly morph based on what is needed. The benefit here is that once the query language is set, the API team does not need to keep writing wrappers as new implementations are needed for different devices.
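As a rough sketch of what such a request might look like (the parameter names and query syntax here are invented, not YQL, OData or any specific product), a single generic route can let each device contract or expand its own payload:

    import java.util.Arrays;
    import java.util.Map;
    import java.util.stream.Collectors;

    // One generic, query-style endpoint instead of a wrapper per device.
    public class QueryApi {
        // e.g. GET /catalog?filter=genre:thriller&fields=id,title&expand=cast&format=flat
        static Map<String, String> parseQuery(String queryString) {
            return Arrays.stream(queryString.split("&"))
                    .map(pair -> pair.split("=", 2))
                    .collect(Collectors.toMap(kv -> kv[0], kv -> kv[1]));
        }

        public static void main(String[] args) {
            Map<String, String> q = parseQuery("filter=genre:thriller&fields=id,title&expand=cast&format=flat");
            // The device shapes its own payload without the API team writing a new wrapper.
            System.out.println("fields requested: " + q.get("fields"));
            System.out.println("expansions: " + q.get("expand"));
        }
    }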

The detriment, however, is that the query-based API is still a set of rules to which the developer needs to adhere, although these rules are much more flexible than the resource-based API model.

Experience-based APIs

Resource-based vs. experience-based APIs

This is the model that Netflix has implemented, which in some ways is a blend of the two above. In this model, we basically have device-specific wrappers but they are designed, implemented and owned by the device teams.

A key concept here is that we have put the API team in the position of gathering the data in a generic, reusable way while putting the device teams in the position of owning the data formatting and delivery. After all, the formatting needs evolve in concert with the UI changes so putting that effort in the hands of those closest to the changes eliminates additional steps.

(For more details on how this system operates, see the links at the bottom of the post.)

As I noted, the range of implementations is potentially much more diverse than these three, although these are some of the most consistent and interesting patterns that I have seen. Regardless of how this is achieved, however, the key is for the API team to stop supporting the API as a service that is designed independent of those SSKDs who consume it.

Rather, the API team needs to view the SSKDs as partners in the design with an interest in making the products as great as possible so the end-users can get the best experience possible. The API team has the opportunity to build services that help developers to be better at developing by focusing on optimizing for the developers’ needs rather than how to optimize the time spent supporting the API.

Given the opportunity ahead with the potential number and diversity of connected devices, the effort to provide such optimizations is a small price to pay for the massive upside.

Why You Probably Don’t Need an API Strategy

By , November 9, 2013 1:04 pm

“Strategy without tactics is the slowest route to victory. Tactics without strategy is the noise before defeat.” – Sun Tzu, The Art of War

Over the course of 2013, the API industry has matured a great deal. Not only have we seen many of the major vendors (e.g. Apigee, Mashery, 3Scale, Layer7, etc.) get acquired and/or receive large rounds of funding, we are also seeing an uptick in new players, new tools and services, new publications, and even a series of API-focused conferences.

Meanwhile, according to a recent survey by Layer7, more than 85% of companies expect to have an “API program” within the next five years. All of this is evidence that the appetite for tools and information about APIs is robust. Accordingly, there is no shortage of people and companies attempting to satisfy that hunger. The question I have amidst this growth, however, is whether the concept of an “API strategy” being served up by some is the right meal for those who are eager to feast on APIs.

The problem: “API Strategy”

The majority of the non-technical conversations in the API industry seem to be focusing on terms like “API strategy” and “API economy.” In fact, I even co-wrote a book called APIs: A Strategy Guide a couple of years ago, further facilitating the use of those words in the API vernacular. There is absolutely a strong case to be made for needing an API strategy for certain situations. But how many companies should really be thinking about their API in that way?

Before continuing, it is worth being clear on what I mean by the terms “strategy” and “tactic”. Bobby Ghoshal puts it nicely in his post, Greeks Gave us Strategy vs. Tactics: Now Understand the Difference, where he says, “A strategy is a grand plan, a tactic is a specific measure implemented to push the grand plan forward.” Applied to APIs, having an API strategy means that the API is the product in and of itself. In other words, the API is the target of a distinct business and opportunity (with its own metrics), which will then have a range of tactics to support it.

There are certainly cases where APIs are businesses and where a strategy is appropriate. The most common example of an API strategy is a company that aspires to build a developer community as a new revenue source or as the foundation of its business. Twilio is an interesting example of such a company. Twilio’s strategy is to offer APIs that tap into its backend services, allowing developers to build apps that support their own communication initiatives.

In this case, the API is a strategy, one that is fundamental to the business as a whole. Accordingly, Twilio invests heavily in the API, supporting documentation for it, fostering the developer community, and all of the other things one would expect such a company to do for their public API (and some would suggest that they do this as well as or better than anyone else). Twilio should invest heavily in this — a significant portion of the opportunity is predicated on the success of the API program.

The reality: “API as a tactic”

But most companies should not be trying to set up distinct businesses with their APIs as the focal point. They should not be trying to generate new revenue streams or reach new audiences through such programs. Instead, most companies should be focusing on their core business and then designing APIs that support that larger strategy.

In pursuing that route, most companies should not be discussing their “API Strategy,” they should be talking about their API as a tactic in support of their broader business strategy and objectives.

An example that I am very close to is Netflix. In the early days of the Netflix API when the program was targeted exclusively to public developers, the API had its own metrics and its own objectives, all of which were designed to support the primary goals of the company.

In this sense, the Netflix API was a product designed to offer incentives to developers to motivate them to build applications around the Netflix experience. These applications would hopefully reach new audiences to generate new subscribers and/or create new user experiences for existing subscribers that would increase their satisfaction with our service. Although the API was treated as a new product within Netflix, it was still operating under the company’s larger business objectives.

While the original vision was incrementally valuable to the company, the results were not as transformative as originally expected. As a result, we pursued a new approach with the API, using it to drive the larger strategy of device proliferation for our growing streaming business. In this sense, the API was transitioning from a product to a tactic.

Today, Netflix can be watched on more than 1,000 different device types, the vast majority of which are developed by Netflix-employed UI Engineers. The API served as an excellent engineering tactic that allowed us to quickly get on more devices, which in turn allows us to create a better overall experience for our customers.

More changes have since been made with the API. Most recently, the Netflix API team, which used to provide traditional REST APIs to the Netflix UI teams, is now providing content distribution platforms that enable data to be pushed from our AWS backend systems to the devices in people’s homes and pockets. We are no longer truly an API team; we are a team that embraces the differences among devices and empowers the UI teams to customize and optimize the request/response models needed for their specific devices. In other words, we are now a platform for API development.

All of these pivots within Netflix further demonstrate that our API is nothing more than a tactic to achieve our broader goals. There are no allegiances to a tactical solution. Tactics can (and should) be modified, discarded and replaced as appropriate. Strategies, on the other hand, should have longer shelf-lives, evolving over time but less frequently overhauled.

The majority of companies that are considering API implementations, based on my conversations and experience in the industry, are more like Netflix than Twilio. There are countless examples of companies who have made similar pivots to refocus their API attention towards supporting the company’s primary business objectives. These examples range from media companies (NPR, The New York Times, The Guardian) to financial institutions (PayPal, E-Trade) to social media sites (Twitter, LinkedIn).

Even service companies like Amazon and Salesforce, whose systems are differentiated in part by their APIs, use them as a tactic to provide increased value for the primary business, which is providing robust services supporting cloud computing and CRM respectively.

The bottom line

The key to a successful API program is to know your audience. Your audience is defined by your business opportunity. So, be very thoughtful about the opportunity and then define your API accordingly. In some cases, pursuing the public developer opportunity is absolutely the right thing to do and it may have a tremendous upside (although realizing that upside is quite rare).

However, if your opportunity is truly to support a broader business objective, then launching a public developer program is not likely to yield large dividends. It is more likely to come with increased costs and risks that will weigh down your returns, dilute your resources for the larger opportunity, and distract you from the real prize. Instead, focus all of your energy on building a great system that helps you optimize for the larger goal. And don’t be married to that system as it is nothing more than a tactic.

Ultimately, if you know your audience, you can define a strategy and then design the right tactics. Otherwise, brace yourself for a very slow route to victory or, more likely, a noisy defeat.

This post originally appeared on The Next Web on September 15, 2013.

 

Netflix Does Not Have an Unlimited Vacation Policy

By , October 28, 2013 1:02 pm

I am finding it increasingly common, when talking about working at Netflix, for people to tell me that their companies are embracing an unlimited vacation policy similar to the one Netflix has been employing for years. When I hear these kinds of statements, my first thought is that these programs are not likely to have the desired outcome. I think that because Netflix does not have an unlimited vacation policy. Rather, Netflix has a culture of “Freedom and Responsibility” (F&R), which is discussed extensively in our publicly available culture slides.

The culture of F&R extends much further than its application to vacations, sick days, and holidays. F&R is a mindset on how we treat each other and our jobs on a day-to-day basis. It is a simple mantra that basically states that we are all adults, so let’s all behave and treat each other like adults. And if a person is not consistently behaving like an adult, they should not work at Netflix.

So, what does it mean to “behave like an adult”? At the core, Netflix seeks out senior-level people who are seasoned, both in their primary competency as well as in how to work effectively with others. We expect people to live up to the values that we discuss in the culture slides, to be people whom we would trust with the future of our business. In fact, we are trusting the future of the business to each and every person who works with us. And that is the central point of it all… trust. If we cannot trust you, you should not be working at Netflix. Period.

This kind of trust takes many shapes. One example is that there are no strict silos in our server access rights. Virtually every engineer at the company can easily get root-level access to virtually any system in the production environment. Another manifestation is the fact that virtually everyone in the company knows key strategic initiatives that we are focused on well before these initiatives go public. As a result, everyone in the company is subject to trading windows for our stock options. That level of openness, at a company the size of Netflix, is exceedingly rare. And we can do all of this because we hire (and fire), essentially, for that deep level of trust.

With respect to vacation, that is just another manifestation of our trust as it pertains to F&R. We trust our employees and peers to take care of what they are responsible for. As long as we are being responsible (i.e. getting our part of the project done, communicating our situation to others, etc.), we don’t have much interest or concern around how others budget the rest of their time. We are free to do as we please because we can be trusted to do what we are supposed to do. As a result, tracking people’s time for vacation days, sick days, holidays, etc. just does not make sense. Track the results of the output, not the amount of time that it takes to produce it (or even which hours of the day were used to complete it).

On a related note, many of the companies that do track hours worked and audit leave days do not also track the late-night hours spent on production deployments, site outages, and the like. If you are going to track one, you must track the other. But it really makes no sense to track either. Companies trust these people to deploy and maintain their entire digital presence at midnight on a Saturday, but they won’t trust these same people to be responsible with their own time. That makes no sense!

At the core, Netflix does not have an “unlimited vacation” policy, we have a trust policy. If we trust you, then we will trust you and will give you great liberty to accomplish amazing things. If not, we will part ways.

API Revolutions and the API Strategy Conference

By , February 25, 2013 7:31 am

Congratulations again to Kin Lane and everyone at 3Scale for a very successful and fulfilling API Strategy Conference. There were a lot of great presentations and panels, as well as many very interesting hallway conversations.

And I was excited to be able to speak at the event! Embedded below are the slides from my presentation, complete with copious notes on each slide to provide the context of what I said during the talk.

The focus of the presentation was on API revolutions. We have seen a number of them in recent years, but there have been significant changes for Netflix and for some others that warrant discussion. The question that remains is: Are these changes specific to a small handful of companies, or do these companies represent things to come for the API world as a whole?

-Daniel

My Presentation at Intelligent Content Conference

By , February 11, 2013 3:22 pm

Last Friday (February 8th), I spoke at the Intelligent Content Conference 2013.  When Scott Abel (aka The Content Wrangler) first contacted me to speak at the event, he asked me to speak about my content management and distribution experiences from both NPR and Netflix.  The two experiences seemed to him to be an interesting blend for the conference. And when I got to the conference, I was absolutely floored by the number of people who had already heard about NPR’s COPE model!

I have to admit, it had been a while since I last thought that much about the NPR days, but doing so brought back a lot of interesting memories.  When more deeply considering those experiences alongside my Netflix experience, I was able to see commonalities in practice, philosophy, execution and results (although at different scales).

At any rate, embedded below are the slides from my presentation.  I spent a good chunk of time commenting each slide as my presentations tend to be very image-heavy, which often results in lost context.  The comments have added that context back in.

Thanks again, Scott, for having me at the conference.  And thanks to all of the attendees with whom I spoke before and after my talk.  The event was a lot of fun!

-Daniel
