Review of Rush’s 40th Anniversary Show – San Jose

By , July 27, 2015 1:16 pm
Rush in San Jose

As my musical tastes have evolved over the years, one group has endured throughout: Rush.  My interest in them grew in parallel with my drumming, at least in my formative drumming years, and that attachment has persisted.  That said, as much as I love Rush, it has been a long time (the Roll the Bones tour in 1991) since my last Rush show.  In fact, I didn’t really anticipate seeing them ever again, mainly because I tend to see live music at smaller, more intimate venues, but also because I just haven’t connected with anything they have done in the past 20 years (since Counterparts).  But this tour felt different to me.  This one, I needed to go to.  The Rush 40th Anniversary Tour is likely their last and one that celebrates an incredible run by this quietly influential band.  Moreover, their concert was going to cycle backwards through their copious collection of songs (roughly 170 in total, depending on how you count some of them), touching on Rush-nerd classics such as Natural Science and Jacob’s Ladder.  Again, this felt like a must-attend event.

Show Duration

First of all, I do want to highlight the fact that the show had no opening band and lasted over three hours from start to finish.  Not bad for a trio of 60+ year olds, huh?

The show was scheduled to start at 7:30pm.  It actually kicked off at 7:45pm.  There was a fever pitch in the arena starting at about 7:20pm, lasting all the way until the opening video started off the show.  Any time the lights dimmed a little during the time leading up to the start of the show, the crowd noise would swell with anticipation, only to settle back down until the next false alarm.

Once they started, the first set lasted a little more than an hour, during which they played 10 songs, spanning the albums Clockwork Angels through Signals.  The band then took a “short break to rejuvenate” lasting about 20 minutes or so, after which they played another 16 songs (keep in mind that some of these songs lasted 10 or more minutes).  The show closed at 10:50pm.

Stage Design

 

Simple Stage Design for R40

Unlike previous Rush concerts that I have been to, this stage design was super clean and simple.  There were no big hats with rabbits in them, for example, bouncing around the stage at various points in time.  Most of the stage floor was open and flat, with just a big yellow R40 in the middle near the front.  There were some simple stage effects (e.g. popcorn machine, laundry machines, fake Marshall stacks, etc.) to either side of the drum set that were bordering on goofy, but I found them largely unoffensive.  It was also a bit entertaining to see the guys in orange R40 suits walking across the stage in the open to shift those effects around mid-song.  I also appreciated the apparent obliviousness of the band members to these moments, which were clearly designed into the show.

The biggest offense in the stage props came in the form of two crew members dressed in a horse suit who then marched across the stage holding a sign that read “Hey”, begging the audience to cheer the word in those empty spaces towards the end of the 2112 Overture.  I mean, seriously guys, this is your 40th anniversary tour.  The place is filled with Rush geeks who know what you expect.  A goofy horse is not necessary…

Behind the drum kit was a large screen flanked by two tall and narrow screens.  These screens were used to present a range of videos between and during songs.  More on this later, but I think it was classy, out of the way of the actual performance, and it generally enhanced the show.

Energy

This was a show of two sets.  To sum up the first set: I felt like Geddy was thrilled to be there, Neil was thrilled to just be playing the drums, and Alex was just thrilled to be alive.

To be more specific, Geddy was bouncing around the stage, deftly managing his significant workload while occasionally doing his awkward duck walk.  His enthusiasm could not be mistaken.  Meanwhile, Neil was behind the kit with a typical stoic expression.  While he played pretty proficiently throughout the night, he just seemed hunched over the whole time, like this tour was weighing him down.  I got the sense that he would be happier just playing the drums back in their garage in Toronto.  Alex, well, he was very stationary for the first set, and there was little evidence that he was even awake.  It really did feel like a two-person band for most of the first set.  This is particularly surprising given that Alex is the youngest of the three – he should have the most energy.

Alex looking lifeless
(courtesy of Jeff Butsch)

Between sets, however, they took a 20-minute break.  I have no idea what they did back stage, but Alex was a different player afterwards.  Maybe he got an injection or perhaps he was uninspired by the more recent material (similar to the sentiment of the majority of the crowd, I presume)…  Whatever it was, the second set put the “life” back into Lifeson.  No change in energy or performance for the other guys, for the most part.

Neil

Neil definitely showed his age.  Don’t get me wrong, he is still a great rock drummer and he generally played well, showing the poise of someone who has nearly 50 years of experience on his instrument.  I have two major critiques of him in this show, though.  First, and more fundamentally, he continues to play most of the parts as he crafted them on the albums.  Granted, the parts are well-constructed.  But he has the ability and creativity to improvise or vary things up more.  It would be great to see the fills in Tom Sawyer and YYZ, for example, played differently than their first incarnation nearly 35 years ago.  The second issue is that he simply made a bunch of mistakes during the show.  I don’t mean mistakes like he played the wrong part.  I mean, he would occasionally stumble and miss the one.  The most obvious such misstep was in the triplet-based fill leading into the guitar solo of Tom Sawyer – he must have played this correctly a million times in the past, yet this was a big flub.  Geddy and Alex played on and Neil caught up again, so no real harm.  I note this because it is really uncharacteristic of an otherwise very precise player repeating the parts he has always played.  To be clear, this didn’t really adversely affect my enjoyment of the show.  I just noted it as an oddity, one that is likely a result of an aging player who has been touring on and off for 40+ years.

Neil played a mini-solo in the first set.  He played a larger one in the second set which split the uprights between the beginning and the end of Cygnus X-1 (with the middle of the song getting excised completely).  I don’t have much to say about the solos aside from the fact that he played them well and that I am a little disappointed that the majority of both solos consist of parts that I have heard many times before.  There were definitely nuggets in both that were exceptional though.

Instruments

Neil and his drum set
(courtesy of Jeff Butsch)

I loved Neil’s drum kit choices!  He ditched the circular drum set with all of the extraneous pieces in favor of his older and more traditional configurations.  Of course, the different eras have different percussive needs, so in exchange for the circular kit Neil had his crew swap out kits during breaks so he could have the appropriate apparatus based on where the set was in the 40-year history.  For example, the kit he played after the big break included the chimes and bells for Xanadu and Closer to the Heart.  It was refreshing to see him get back to some of his roots and to match the simplicity of the stage design with the simplicity of his drum sets (with “simplicity in his drum sets” being a relative phrase).

Geddy with four instruments: vocals, keys, bass, guitar
(courtesy of Jeff Butsch)

The other really noteworthy detail about the instruments is that both Geddy and Alex pulled out their double-neck guitars for Xanadu.  Those things look great!

Song Choice

First, coming up with a set list for these shows must have been a very difficult task: taking roughly 170 songs and boiling them down to about two hours and forty-five minutes’ worth of material.  There are the obvious songs that you would expect, like Spirit of Radio, Closer to the Heart, Tom Sawyer, and Subdivisions.  But I was thrilled that they pulled out some others that cater to the more committed fan, like 2112 and part of Hemispheres.  Songs that surprised me by their absence include Limelight, Show Don’t Tell, and The Trees; I would have gladly forsaken any of these in favor of La Villa Strangiato, Vital Signs, and By-Tor.  All in all, a reasonable set list given the concept they were shooting for.  That said, if I were them, I probably would have trimmed the bookends and had the entire set be material from between Fly By Night and Roll the Bones, giving them more time to fill out the set with their stronger material.  Again, just my opinion.

Video Interludes

There were a bunch of videos that they played throughout the show.  Overall, I found them to be a lot of fun and a good break from the music.  It was great to see the cameos from a range of comedians like Paul Rudd, Jason Segel, Eugene Levy and Jerry Stiller.  I liked how they didn’t take themselves too seriously during the interludes – they were able to laugh at themselves a bit.  Good stuff.  The videos during the music were also interesting at times, but I found myself not paying attention to them during the actual performance.  Maybe this is because I was sitting off to the side more.


All in all, this was a great show.  With the ticket prices what they were, I feel like I did my part in sending this under-appreciated band out to pasture… but I don’t feel bad about spending that money at all.  The material is as great as ever and they played it well and with dignity.

Thanks for all the memories, Rush!

 

The future of API design: The orchestration layer

By , January 18, 2014 9:24 pm

The digital world is expanding at an amazing rate, giving us access to applications and content on myriad connected devices in our homes, offices, cars, and pockets, and even on our bodies. The glue that allows all of this to happen, that connects the companies who provide these services to the devices that you use, is the API.

Because APIs have such a huge responsibility for so many people and companies, it is natural that API design is often one of the industry’s liveliest discussions, touching on a range of topics including resource modeling, payload format, how to version the system, and security.

While these are likely important areas to explore when designing virtually any API, the reality is that a much larger decision needs to be made first. That decision is based on a fundamental question: who are the primary audiences for this API and how can we optimize for those audiences?

This seems like an innocuous enough question, but don’t underestimate its importance or complexity in the growing world of APIs.

Years ago, this question was much simpler

At that time, many emerging APIs were being built as open or public APIs, targeting a large set of unknown developers (LSUDs) external to the providing company.

Because of the (hopefully) vast numbers of external developers using the API representing different use cases, the most sensible way to design the API for this audience is to have the providing API team design it in a very clean, concise, and resource-oriented way that closely represents the data model and/or features of its source(s).

In a previous post, I referred to these as OSFA APIs (or one-size-fits-all APIs). Allowing for such granularity in the modeling means that any developer who wishes to use the API can mix and match the elements in whatever way they choose to satisfy their application without further API team involvement.

The resource-model approach to designing an API can be very powerful, especially for this type of audience. The problem with this approach, however, is that the way that many companies use APIs today is different than described above. While many are still supporting the use cases of LSUDs, more are using their APIs to support a growing mobile or device strategy.

For some of these implementations, the engagement with the developers is different. The audience is a small set of known developers (SSKDs). They may be engineers down the hall from the API team, a contracted company hired to develop an iPhone app, or an engineering team in a partnering company. In all of these cases, however, the API team knows who these people are (at least in the abstract sense).

More importantly, however, the API team and the providing company care about the success of these implementations in a different way than they might care about the applications developed by the LSUDs. In fact, the success of the SSKDs may very well be paramount to the success of the business as a whole, a model that is becoming increasingly pervasive.

Because of this change in audience and the deep interest in their success, there is great opportunity to change the API design.

For the SSKDs, having granular resource-based APIs that closely represent the data model works, but it just isn’t as optimal as it could be. This is especially the case when you consider the growing number of device types in the world and the fact that more and more companies’ business strategies are dependent on providing value to customers on such devices.

So, all it takes is a couple of devices with diverging needs and/or capabilities, each of great import to the company, for the resource-based API to start to show some warts. Making the API better, more optimized, for each of these target applications is the next logical, and most critical, step.

Enter the Orchestration Layer

An API Orchestration Layer (OL) is an abstraction layer that takes generically-modeled data elements and/or features and prepares them in a more specific way for a targeted developer or application.

To address this opportunity, more companies are employing orchestration layers into their API infrastructure. While there are many ways in which to implement this architectural construct, the concept remains the same across all of them.

Below, I will describe a few of the more common patterns that I have seen (and/or been involved in implementing). But first, here are a few key principles that need to be considered when building an OL:

1. Most APIs are designed by the API provider with the goal of maintaining data model purity. When building an OL, be prepared to sometimes abandon purity in favor of optimizations and/or performance.

2. Many APIs are designed by API teams to make it easier for the API team to support. When building an OL, be prepared to potentially add complexity for the API team (or other teams, depending on the way it is implemented).

While this sounds undesirable, the goal here is to dramatically improve efficiency and/or simplicity for other people at some mild cost to the API team. Also keep in mind that such costs can potentially be programmed away over time.

3. It is important to understand the breadth of the audiences for the API. Depending on those constituents, you may only need the OL. In other cases, you may need the OSFA foundation in addition to the OL.

Here are a few examples of how some OLs have been approached:

Device-specific wrappers

This is the most common pattern that I have seen because most companies that are experiencing the distress referenced above already have APIs that they still use, continue to support, and invest in. The result is to continue to offer the granular resources as they always have, but to offer a wrapper tier on top of them – with new endpoints that are tailored to specific developers, devices or device clusters.

In this model, the API team will work more closely with, for example, the iPhone team to write a custom wrapper that handles specific requests and delivers payloads that are optimized for the iPhone app. Most often, the team that builds the endpoints and the wrappers is the API team, although that doesn’t have to be the case.
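To make the wrapper pattern concrete, here is a minimal sketch in Python. The generic get_title_resource service and all of the field names are hypothetical stand-ins rather than any particular company’s API; the point is simply that each wrapper reshapes the same granular resource for its target device.

```python
# A minimal sketch of the device-specific wrapper pattern.
# The generic resource and all field names below are hypothetical.

def get_title_resource(title_id):
    """Generic, granular resource as an OSFA API might return it."""
    return {
        "id": title_id,
        "title": "Example Title",
        "synopsis_long": "A very long synopsis ...",
        "synopsis_short": "A short synopsis.",
        "box_art": {"small": "/img/s.jpg", "large": "/img/l.jpg"},
        "cast": ["Actor A", "Actor B", "Actor C"],
    }

def phone_title(title_id):
    """Wrapper endpoint tailored to a small-screen phone app."""
    raw = get_title_resource(title_id)
    return {
        "id": raw["id"],
        "title": raw["title"],
        "synopsis": raw["synopsis_short"],   # small screen: short copy
        "image": raw["box_art"]["small"],    # small screen: small image
    }

def tv_title(title_id):
    """Wrapper endpoint tailored to a 10-foot TV experience."""
    raw = get_title_resource(title_id)
    return {
        "id": raw["id"],
        "title": raw["title"],
        "synopsis": raw["synopsis_long"],
        "image": raw["box_art"]["large"],
        "cast": raw["cast"][:2],             # this UI shows top billing only
    }

if __name__ == "__main__":
    print(phone_title(42))
    print(tv_title(42))
```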

Query-based APIs

In this model, the API team is putting the power in the hands of the requesting developer, although that power is limited. The goal here is to create a more flexible way in which the requester can make requests and tailor payloads without putting additional ongoing burden on the API team, as could be the case with the Device-Specific Wrappers.

This is achieved by breaking down the resource-based APIs and allowing them to be queried against like a database through flexible parameters and payloads that can contract, expand and possibly morph based on what is needed. The benefit here is that once the query language is set, the API team does not need to keep writing wrappers as new implementations are needed for different devices.

The detriment, however, is that the query-based API is still a set of rules to which the developer needs to adhere, although these rules are much more flexible than those of the resource-based API model.
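As a rough sketch of the query-based approach, the Python below lets the caller shape the payload with a fields list and an expand flag; the parameter names and data are invented for illustration, not drawn from any real API.

```python
# A minimal sketch of a query-based API: the caller, not the API team,
# decides how much of the document comes back. Names are illustrative.

CATALOG = {
    7: {"id": 7, "title": "Example", "rating": 4.5,
        "synopsis": "A long synopsis ...",
        "similar": [8, 9, 10]},
}

def query(title_id, fields=None, expand_similar=False, similar_limit=3):
    """Return a payload containing only the requested fields."""
    item = CATALOG[title_id]
    requested = fields or list(item.keys())
    payload = {f: item[f] for f in requested if f in item}
    if expand_similar and "similar" in payload:
        # Expand bare IDs into (trimmed) sub-documents.
        payload["similar"] = [{"id": i} for i in item["similar"][:similar_limit]]
    return payload

# A phone might ask for a tiny payload, a TV for a richer one:
print(query(7, fields=["id", "title"]))
print(query(7, fields=["id", "title", "similar"], expand_similar=True))
```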

Experience-based APIs

Resource-based APIs vs. experience-based APIs

This is the model that Netflix has implemented, which in some ways is a blend of the two above. In this model, we basically have device-specific wrappers but they are designed, implemented and owned by the device teams.

A key concept here is that we have put the API team in the position of gathering the data in a generic, reusable way while putting the device teams in the position of owning the data formatting and delivery. After all, the formatting needs evolve in concert with the UI changes, so putting that effort in the hands of those closest to the changes eliminates additional steps.

(For more details on how this system operates, see the links at the bottom of the post.)
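To illustrate the ownership split (and only that; this is not Netflix’s actual implementation, which is described in the posts referenced above), the sketch below has a generic data-gathering function owned by the API team and per-device formatting functions registered by the device teams. All names are invented.

```python
# Illustrative split of responsibilities in an experience-based API:
# the API team owns generic data gathering; each device team registers
# its own formatting function. Everything here is hypothetical.

# --- owned by the API team: generic, reusable data gathering ----------
def gather_home_screen_data(user_id):
    return {
        "user": {"id": user_id, "name": "Example User"},
        "rows": [
            {"genre": "Drama", "titles": ["Title A", "Title B", "Title C"]},
            {"genre": "Comedy", "titles": ["Title D", "Title E"]},
        ],
    }

# --- owned by each device team: formatting for its own UI -------------
FORMATTERS = {}

def endpoint(device):
    def register(fn):
        FORMATTERS[device] = fn
        return fn
    return register

@endpoint("phone")
def phone_home(data):
    # The phone UI wants one flat, trimmed list.
    titles = [t for row in data["rows"] for t in row["titles"]]
    return {"greeting": data["user"]["name"], "titles": titles[:4]}

@endpoint("tv")
def tv_home(data):
    # The TV UI wants the full row structure for a grid layout.
    return {"rows": data["rows"]}

def handle(device, user_id):
    return FORMATTERS[device](gather_home_screen_data(user_id))

print(handle("phone", 1))
print(handle("tv", 1))
```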

As I noted, the range of implementations is potentially much more diverse than these three, although these are some of the most consistent and interesting patterns that I have seen. Regardless of how this is achieved, however, the key is for the API team to stop supporting the API as a service that is designed independently of the SSKDs who consume it.

Rather, the API team needs to view the SSKDs as partners in the design with an interest in making the products as great as possible so the end users can get the best experience possible. The API team has the opportunity to build services that help developers be better at developing, by focusing on optimizing for the developers’ needs rather than on minimizing the time spent supporting the API.

Given the opportunity ahead with the potential number and diversity of connected devices, the effort to provide such optimizations is a small price to pay for the massive upside.

For more information on the Netflix use case, the problems we encountered that prompted our redesign, and how we implemented our Experience-Based API, here are a few links:

Why You Probably Don’t Need an API Strategy

By , November 9, 2013 1:04 pm

“Strategy without tactics is the slowest route to victory. Tactics without strategy is the noise before defeat.” – Sun Tzu, The Art of War

Over the course of 2013, the API industry has matured a great deal. Not only have we seen many of the major vendors (i.e. Apigee, Mashery, 3Scale, Layer7, etc.) get acquired and/or receive large rounds of funding, we are also seeing an uptick in new players, new tools and services, new publications, and even a series of API-focused conferences.

Meanwhile, according to a recent survey by Layer7, more than 85% of companies expect to have an “API program” within the next five years. All of this is evidence that the appetite for tools and information about APIs is robust. Accordingly, there is no shortage of people and companies attempting to satisfy that hunger. The question I have amidst this growth, however, is whether the concepts around API strategy being served up by some are the right meal for those who are eager to feast on APIs.

The problem: “API Strategy”

The majority of the non-technical conversations in the API industry seem to be focusing on terms like “API strategy” and “API economy.” In fact, I even co-wrote a book called APIs: A Strategy Guide a couple of years ago, further facilitating the use of those words in the API vernacular. There is absolutely a strong case to be made for needing an API strategy for certain situations. But how many companies should really be thinking about their API in that way?

Before continuing, it is worth being clear on what I mean by the terms “strategy” and “tactic”. Bobby Ghoshal puts it nicely in his post, Greeks Gave us Strategy vs. Tactics: Now Understand the Difference, where he says, “A strategy is a grand plan, a tactic is a specific measure implemented to push the grand plan forward.” Applied to APIs, if there is an API strategy, it means that the API is the product in and of itself. In other words, the API is the target of a distinct business and opportunity (with its own metrics), which will then have a range of tactics to support it.

There are certainly cases where APIs are businesses and where a strategy is appropriate. The most common example of an API strategy is around companies who aspire to build a developer community as a new revenue source or as the foundation of their business. Twilio is an interesting example of such a company. Twilio’s strategy is to offer APIs that tap into their backend services to allow developers to build apps supporting their communication initiatives.

In this case, the API is a strategy, one that is fundamental to the business as a whole. Accordingly, Twilio invests heavily in the API, supporting documentation for it, fostering the developer community, and all of the other things one would expect such a company to do for their public API (and some would suggest that they do this as well as or better than anyone else). Twilio should invest heavily in this — a significant portion of the opportunity is predicated on the success of the API program.

The reality: “API as a tactic”

But most companies should not be trying to set up distinct businesses with their APIs as the focal point. They should not be trying to generate new revenue streams or reach new audiences through such programs. Instead, most companies should be focusing on their core business and then designing APIs that support that larger strategy.

In pursuing that route, most companies should not be discussing their “API Strategy,” they should be talking about their API as a tactic in support of their broader business strategy and objectives.

An example that I am very close to is Netflix. In the early days of the Netflix API when the program was targeted exclusively to public developers, the API had its own metrics and its own objectives, all of which were designed to support the primary goals of the company.

In this sense, the Netflix API was a product designed to offer incentives to developers to motivate them to build applications around the Netflix experience. These applications would hopefully reach new audiences to generate new subscribers and/or create new user experiences for existing subscribers that would increase their satisfaction with our service. Although the API was treated as a new product within Netflix, it was still operating under the company’s larger business objectives.

While the original vision was incrementally valuable to the company, the results were not as transformative as originally expected. As a result, we pursued a new approach with the API, using it to drive the larger strategy of device proliferation for our growing streaming business. In this sense, the API was transitioning from a product to a tactic.

Today, Netflix can be watched on more than 1,000 different device types, and the UIs for the vast majority of them are developed by Netflix-employed UI engineers. The API served as an excellent engineering tactic that allowed us to quickly get on more devices, which in turn allows us to create a better overall experience for our customers.

More changes have since been made with the API. Most recently, the Netflix API team, which used to provide traditional REST APIs to the Netflix UI teams, is now providing content distribution platforms that enable data to be pushed from our AWS backend systems to the devices in people’s homes and pockets. We are no longer truly an API team; we are a team that embraces the differences among the devices and empowers the UI teams to customize and optimize the request/response models needed for their specific device. In other words, we are now a platform for API development.

All of these pivots within Netflix further demonstrate that our API is nothing more than a tactic to achieve our broader goals. There are no allegiances to a tactical solution. Tactics can (and should) be modified, discarded and replaced as appropriate. Strategies, on the other hand, should have longer shelf-lives, evolving over time but less frequently overhauled.

The majority of companies that are considering API implementations, based on my conversations and experience in the industry, are more like Netflix than Twilio. There are countless examples of companies who have made similar pivots to refocus their API attention towards supporting the company’s primary business objectives. These examples range from media companies (NPR, The New York Times, The Guardian) to financial institutions (PayPal, E-Trade) to social media sites (Twitter, LinkedIn).

Even service companies like Amazon and Salesforce, whose systems are differentiated in part by their APIs, use them as a tactic to provide increased value for the primary business, which is providing robust services supporting cloud computing and CRM respectively.

The bottom line

The key to a successful API program is to know your audience. Your audience is defined by your business opportunity. So, be very thoughtful about the opportunity and then define your API accordingly. In some cases, pursuing the public developer opportunity is absolutely the right thing to do and it may have a tremendous upside (although realizing that upside is quite rare).

However, if your opportunity is truly to support a broader business objective, then launching a public developer program is not likely to yield large dividends. It is more likely to come with increased costs and risks that will weigh down your returns, dilute your resources for the larger opportunity, and distract you from the real prize. Instead, focus all of your energy on building a great system that helps you optimize for the larger goal. And don’t be married to that system as it is nothing more than a tactic.

Ultimately, if you know your audience, you can define a strategy and then design the right tactics. Otherwise, brace yourself for a very slow route to victory or, more likely, a noisy defeat.

This post originally appeared on The Next Web on September 15, 2013.

 

Netflix Does Not Have an Unlimited Vacation Policy

By , October 28, 2013 1:02 pm

I am finding it increasingly common, when talking about working at Netflix, for people to tell me that their companies are embracing an unlimited vacation policy similar to the one Netflix has been employing for years. When I hear these kinds of statements, my first thought is that these programs are not likely to have the desired outcome. I think that because Netflix does not have an unlimited vacation policy. Rather, Netflix has a culture of “Freedom and Responsibility” (F&R), which is discussed extensively in our publicly available culture slides.

The culture of F&R extends much further than its application to vacations, sick days, and holidays. F&R is a mindset on how we treat each other and our jobs on a day-to-day basis. It is a simple mantra that basically states that we are all adults, so let’s all behave and treat each other like adults. And if a person is not consistently behaving like an adult, they should not work at Netflix.

So, what does it mean to “behave like an adult”? At the core, Netflix seeks out senior-level people who are seasoned, both in their primary competency as well as in how to work effectively with others. We expect people to live up to the values that we discuss in the culture slides, to be people whom we would trust with the future of our business. In fact, we are trusting the future of the business to each and every person who works with us. And that is the central point of it all… trust. If we cannot trust you, you should not be working at Netflix. Period.

This kind of trust takes many shapes. One example is that there are no strict silos in our server access rights. Virtually every engineer at the company can easily get root-level access to virtually any system in the production environment. Another manifestation is the fact that virtually everyone in the company knows key strategic initiatives that we are focused on well before these initiatives go public. As a result, everyone in the company is subject to trading windows for our stock options. That level of openness, at a company the size of Netflix, is exceedingly rare. And we can do all of this because we hire (and fire), essentially, for that deep level of trust.

With respect to vacation, that is just another manifestation of our trust as it pertains to F&R. We trust our employees and peers to take care of what they are responsible for. As long as we are being responsible (i.e. getting our part of the project done, communicating our situation to others, etc.), then we don’t have much interest or concern around how others budget the rest of their time. We are free to do as we please because we can be trusted to do what we are supposed to do. As a result, tracking people’s time for vacation, sick days, holidays, etc. just does not make sense. Track the results of the output, not the amount of time that it takes to execute it (or even which hours of the day were used to complete it).

On a related note, many of the companies that do track hours worked and audit leave days do not also track late-night hours spent on production deployments, site outages, etc. If you are going to track one, you must track the other. But it really makes no sense to track either. Companies trust these people to deploy and maintain their entire digital presence at midnight on a Saturday, but they won’t trust these same people to be responsible with their own time. That makes no sense!

At the core, Netflix does not have an “unlimited vacation” policy, we have a trust policy. If we trust you, then we will trust you and will give you great liberty to accomplish amazing things. If not, we will part ways.

API Revolutions and the API Strategy Conference

By , February 25, 2013 7:31 am

Congratulations again to Kin Lane and everyone at 3Scale for a very successful and fulfilling API Strategy Conference. There were a lot of great presentations and panels, as well as many very interesting hallway conversations.

And I was excited to be able to speak at the event! Embedded below are the slides from my presentation, complete with copious notes on each slide to provide the context of what I said during the talk.

The focus of the presentation was on API revolutions. We have seen a number of them in recent years, but there have been significant and substantial changes for Netflix and for some others that warrant discussion. The question that remains is: are these changes specific to a small handful of companies, or do these companies represent things to come for the API world as a whole?

-Daniel

My Presentation at Intelligent Content Conference

By , February 11, 2013 3:22 pm

Last Friday (February 8th), I spoke at the Intelligent Content Conference 2013.  When Scott Abel (aka The Content Wrangler) first contacted me to speak at the event, he asked me to speak about my content management and distribution experiences from both NPR and Netflix.  The two experiences seemed to him to be an interesting blend for the conference. And when I got to the conference, I was absolutely floored by the number of people who had already heard about NPR’s COPE model!

I have to admit, it had been a while since I last thought that much about the NPR days, but doing so brought back a lot of interesting memories.  When more deeply considering those experiences alongside my Netflix experience, I was able to see commonalities in practice, philosophy, execution and results (although at different scales).

At any rate, embedded below are the slides from my presentation.  I spent a good chunk of time adding comments to each slide, as my presentations tend to be very image-heavy, which often results in lost context.  The comments have added that context back in.

Thanks again, Scott, for having me at the conference.  And thanks to all of the attendees with whom I spoke before and after my talk.  The event was a lot of fun!

-Daniel

Why REST Keeps Me Up At Night

By , December 11, 2012 4:43 am

This post first appeared on ProgrammableWeb.com

With respect to Web APIs, the industry has clearly and emphatically landed on REST as the standard way to implement these services. And for good reason… REST, which is generally implemented as a one-size-fits-all solution, is an excellent choice for most companies who wish to expose their content to third parties, mobile app developers, partners, internal teams, etc. There are many tomes about what REST is and how best to implement it, so I won’t go into detail here. But if I were to sum up the value proposition of the traditional REST solution to these companies, I would describe it as:

REST APIs are excellent at handling requests in a generic way, establishing a set of rules that allow a large number of known and unknown developers to easily consume the services that the API offers.

In this model, everyone knows how to behave and it can be incredibly powerful. The API providers establish a set of rules and the API consumers must adhere to those rules to get what they want from the API. It is perfect, right? In many cases, the answer is obviously yes. But in other cases, as our world scales and the number of ways for people to consume digital content and services continues to expand, this one-size-fits-all model is likely to fall short.

The potential shortcomings surface because this model assumes that a key goal of these APIs is to serve a large number of known and unknown developers. The more I talk to people about APIs, however, the clearer it is that public APIs are waning in popularity and business opportunity and that the internal use case is the wave of the future. There are books, articles and case studies cropping up almost daily supporting this view. And while my company, Netflix, may be an outlier because of the scale in which we operate, I believe that we are an interesting model of how things are evolving.

Netflix is currently available on over 800 different device types, including game consoles, mobile phones, TVs, Blu-ray players, tablets, computers, and almost any other device that can stream video. Our API alone handles more than two billion incoming requests on peak days, which translates into almost ten billion real-time outgoing requests from the API to internal dependency services. These numbers are up by about 70x from just two years ago. Most companies do not have that kind of scale, but it is clear that with the continued growth of the device market more companies are resetting their strategies to be less about the public API and more about internal consumption of their own APIs to support device proliferation. When this transition occurs, the API is no longer targeting “a large number of known and unknown developers.” Rather, the key audience is a small number of known developers.

The potential conflict between the internal and public use cases is in the design of the API itself. Keep in mind that the design implications will not be problematic in many scenarios. It becomes a potential problem if the breadth of devices becomes so wide that the variability of features across them becomes substantially harder to manage. It is the breadth of devices that creates a problem for the one-size-fits-all API solutions.

If your target is a small group of teams with whom you have close relationships, the dynamics around the API change. For Netflix, we persisted with the one-size-fits-all REST model for quite a while as more and more devices got added on top of the API. But given our scale, one thing has become increasingly obvious. Our REST API, while very capable of handling the requests from our devices in a generic way, is optimized for none of them. This is the case because our REST API focuses on resources that are meant to be granular representations of the data, from the perspective of the data. The granularity is exactly what allows the API to support a large number of known and unknown developers. Because it sets the rules for how to interface with the data, it also forces all of the developers to adhere to those rules. That means that each device potentially has to work a little harder (or sometimes a lot harder) to get the data needed to create great user experiences because devices are different from each other.

The differences across these devices can be varied and sometimes significant. Here are some examples of variances across devices that may be challenging for one-size-fits-all models:

  • Different devices may have different memory capacity
  • Some devices may require a unique or proprietary format or delivery method
  • Some devices may perform better with a flatter or more hierarchical document model
  • Different devices have different screen real estate sizes which may impact which data elements are needed
  • Some devices may perform better having bits streamed across HTTP rather than delivered as a complete document
  • Different devices allow for different user interaction models, which could influence the metadata fields, delivery method, interaction model, etc.

Just think about the differences between an iPhone and your TV and how they beg for different user experiences. Moreover, the Xbox and the Wii, both of which project to the TV, are different in the way users interact with them as well as in the hardware constraints, both of which may require different APIs to support them. When considering more than 800 different device types, the variance across them becomes overwhelming. And as more manufacturers continue to innovate on these devices, the variance may only broaden.

How do you know if your company is ready to consider alternatives to the one-size-fits-all API model? Here are the ingredients needed to help you make that decision:

  • A small number of targeted API consumers is the top priority
  • Very close relationships between these API consumers and the API team
  • An increasing divergence of needs across the top priority API consumers
  • Strong desire by the API consumers for more optimized interactions with the API
  • High value proposition for the company providing the API to make these API consumers as effective as possible

If these ingredients are met, then you have the recipe for needing a new kind of API.

Because of the differences in these devices, Netflix UI teams would often have to do a range of things to get around our REST API to better serve the users of the device. Sometimes, the API team would be required to extend the base service to handle special cases, often resulting in spaghetti code or undocumented features. And because different teams have different needs, in the REST API world, we would often need to delay feature development for some due to the challenges around prioritization. In addition to these kinds of issues, significant performance and/or architectural problems are bound to emerge. For example, these more granular APIs often result in chattier interactions between device and server or chunkier payloads, as I discussed in a previous post on the Netflix Tech Blog.

To solve this issue, it is becoming increasingly common for companies (including Netflix) to think about the interaction model in a different way. Rather than having the API create a set of rather rigid rules and forcing the various devices to follow them, companies are now thinking about ways to let the UI have more control in dictating what is needed from a service in support of their needs. Some are creating custom REST-based APIs to support a specific device or category of devices. Others are thinking about greater granularity in REST resources with more batching of calls. Some are creating orchestration layers, such as ql.io, in their API system to customize the interaction. These are all smart and practical ways around the problem. But with the growing number of devices, the increasing urge for companies to be on as many of them as possible, and the desire for continued innovation across these devices, these various solutions are still somewhat restricted. They are still forcing the developers to adhere to server-side rules and non-optimized payloads in an effort to have a one-size-fits-all solution. These approaches are closer to the flexibility needed in that they are not as rigid as the typical REST-based solution, but when supporting as many devices as Netflix does, we believe they fall short for us.

For Netflix, our goal is to take advantage of the differences of these devices rather than treating them generically. As a result, we are handing over the creation of the rules to the developers who consume the API rather than forcing them to adhere to a generic set of rules applied by the API team. In other words, we have created a platform for API development.
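As a rough illustration of why that hand-off matters (this is a sketch with invented service names, not Netflix code), compare a chatty client-side flow against a device-owned adapter that runs next to the data services, collapses the fan-out, and trims the payload before it ever crosses the network.

```python
# Contrast: a chatty, generic REST interaction vs. a device-team-owned
# adapter that collapses it into one optimized request. All names invented.

# Generic, granular services as an OSFA API might expose them.
def get_user(user_id):
    return {"id": user_id, "name": "Example"}

def get_queue(user_id):
    return [101, 102, 103]

def get_title(title_id):
    return {"id": title_id, "title": "Title %d" % title_id}

def chatty_client_flow(user_id):
    """The device makes N + 2 round trips over a high-latency network."""
    user = get_user(user_id)
    queue = get_queue(user_id)
    titles = [get_title(t) for t in queue]          # one call per title
    return {"user": user["name"], "queue": titles}

def device_adapter(user_id, limit=2):
    """Device-team-owned adapter running server side: the device makes
    ONE request, and the fan-out plus trimming happen near the data."""
    user = get_user(user_id)
    titles = [get_title(t) for t in get_queue(user_id)[:limit]]
    return {"user": user["name"], "queue": [t["title"] for t in titles]}

print(chatty_client_flow(1))
print(device_adapter(1))
```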

7 Ways to Make Your API More Successful

By , December 10, 2012 8:35 pm

The purpose of a content API is to make the content available to its audience in the most useful and efficient way possible. To be useful, an API needs to help developers make their jobs easier. This could mean a wide range of things, including making it easier to dig into the API, allowing for greater flexibility in the responses, and improving performance and efficiency for both the API and its consumers. Below are seven development techniques (all of which are part of the NPR API) that can help content providers improve the usefulness and efficiency of their APIs on both sides of the track. These techniques played a critical role in the success of the API, which now delivers over 700 million stories per month to its users (more stats on the NPR API coming soon on our Inside NPR.org blog).

Be Flexible: Support Multiple Output Formats
Making the API as available and accessible as possible is very important in drawing developers to use it. So providing the content in a range of formats will increase the likelihood that the developer can rely on existing libraries and make as few changes to the code as possible.

The NPR API offers eight different output formats in an effort to improve efficiency for the developers. Above is a graph demonstrating the distribution of requests for each of the formats in July of 2009. As you can see, the majority of requests are to our proprietary XML markup (NPRML). That also means that almost 50% of the requests, or about 20M requests per month, use the other seven formats. In offering these other non-proprietary XML formats, the API is able to support developers that may have existing applications that pull in content in one of these standardized formats, such as MediaRSS or Atom.

To make it even easier for people to use the API, NPR also launched with JavaScript and HTML “widgets”. The other six formats require more sophistication in order to put the content in an application or website. The widgets, however, are pre-designed feeds of NPR content (based on the developer’s selections) that can be easily dropped into a page.
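The sketch below shows the basic mechanics of serving one story in several output formats from a single internal representation. The format names follow the ones mentioned in this post, but the markup and the output-selection parameter are simplified illustrations, not NPR’s actual code.

```python
# One internal story representation, several output transforms.
import json
from xml.sax.saxutils import escape

STORY = {"id": 1001, "title": "Example story", "teaser": "A short teaser."}

def to_json(story):
    return json.dumps(story)

def to_nprml(story):
    return ('<story id="%d"><title>%s</title><teaser>%s</teaser></story>'
            % (story["id"], escape(story["title"]), escape(story["teaser"])))

def to_rss_item(story):
    return ('<item><title>%s</title><description>%s</description></item>'
            % (escape(story["title"]), escape(story["teaser"])))

TRANSFORMS = {"json": to_json, "nprml": to_nprml, "rss": to_rss_item}

def render(story, output="nprml"):
    """Pick the transform based on an output-format query parameter."""
    return TRANSFORMS[output](story)

print(render(STORY, "json"))
print(render(STORY, "rss"))
```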

Be Efficient: Handle Partial Response
This concept is now starting to get some more traction, now that Google has announced partial response handling for some of their APIs. NPR’s API also makes extensive use of this feature because it really is tremendously valuable to the provider and the consumer of the API. For example, NPR stories contain a wide variety of fields and assets in the API. If the consumer is forced to handle the complete document, even if they only want a few fields, they have to endure all of the latency issues from the API itself as well as the additional processing power needed to handle the undesired fields.

As a result, NPR incorporated a “fields” parameter (the same parameter name used by Google) that can be used in the query string to limit the resulting document to only the fields of interest. This approach creates documents that are smaller and much more efficient. Overwhelmingly, more requests to the NPR API contain the fields parameter than those that do not (in fact, it isn’t even close).

Here are a few examples of how the same query to the NPR API, returning the same stories, delivers different documents based on the fields parameter (you will need to register for your own NPR API key to execute these queries):

http://api.npr.org/query?id=1001&apiKey=your_api_key

http://api.npr.org/query?id=1001&fields=title&apiKey=your_api_key

http://api.npr.org/query?id=1001&fields=title,teaser,text,image,audio&apiKey=your_api_key
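Server-side, the fields parameter amounts to a simple projection over the full story document before any transform is applied. Here is a minimal sketch with made-up story data:

```python
# Partial response: project the full document down to the requested fields.

FULL_STORY = {
    "id": 1001,
    "title": "Example story",
    "teaser": "A short teaser.",
    "text": "The full story text ...",
    "image": {"src": "/img/example.jpg", "caption": "An example image"},
    "audio": {"src": "/audio/example.mp3", "duration": 240},
    "byline": "A Reporter",
}

def partial_response(story, fields=None):
    """Return only the requested fields; no fields parameter means the full doc."""
    if not fields:
        return story
    wanted = [f.strip() for f in fields.split(",")]
    return {k: v for k, v in story.items() if k in wanted}

# Mirrors ...&fields=title,teaser,text,image,audio&...
print(partial_response(FULL_STORY, "title,teaser,text,image,audio"))
# And the much smaller ...&fields=title&... response:
print(partial_response(FULL_STORY, "title"))
```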

An extension of partial response is to allow the developer to specify the number of items they would like in return. Some APIs return a fixed number of results, which can bloat the document just like the extra fields can. The NPR API, to counter this, allows the developer to pass in the number of results desired (with a fixed ceiling for any given request). To dig deeper in the results, we incorporated a “pagination” feature in the API. Here are some examples of how to control the number of stories:

http://api.npr.org/query?id=1001&numResults=5&apiKey=your_api_key

http://api.npr.org/query?id=1001&numResults=5&startNum=6&apiKey=your_api_key
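From the consumer’s side, numResults and startNum also make it easy to walk a large result set page by page. Here is a small client-side sketch with the HTTP call stubbed out so that it runs without an API key:

```python
# Paging through an NPR-style API with numResults / startNum.

def fetch_page(start_num, num_results):
    """Stand-in for an HTTP GET against
    api.npr.org/query?id=1001&numResults=...&startNum=...&apiKey=..."""
    all_story_ids = list(range(1, 14))          # pretend 13 stories exist
    return all_story_ids[start_num - 1:start_num - 1 + num_results]

def fetch_all(page_size=5):
    stories, start = [], 1
    while True:
        page = fetch_page(start, page_size)
        stories.extend(page)
        if len(page) < page_size:               # a short page is the last page
            break
        start += page_size                      # next startNum
    return stories

print(fetch_all())   # [1, 2, ..., 13], fetched in pages of 5
```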

Give Them Control: Allow for Customizable Output Markup (“Remapping Fields”)
As mentioned in the section on output formats above, if the API can easily serve existing applications that expect specific markup, it potentially increases adoption and improves developer efficiency. To extend that functionality, the NPR API offers a function that we call “Remap” which essentially lets the developer modify the name of one or more XML elements or attributes in the output at request time. This is done in the query string and the API transforms the markup accordingly in real-time. Here are a few examples:

In this example, the remap parameter changes the story title to < specialTitle >:

http://api.npr.org/query?id=1001&remap=list.story.title:specialTitle&apiKey=your_api_key

In this example, the remap parameter changes the story title to < specialTitle > and it changes the image caption to < imageCaption >:

http://api.npr.org/query?id=1001&remap=list.story.title:specialTitle,list.story.image.caption:imageCaption&apiKey=your_api_key

In this example, the remap parameter changes the audio element’s id attribute to be named audioId:

http://api.npr.org/query?id=1001&remap=list.story.audio~id:audioId&apiKey=your_api_key

Another benefit to remap (which we have fortunately not had to use) is that it can be used to handle backward compatibility as the API grows and changes. NPR’s philosophy is to make sure that upgrades do not adversely affect existing functionality. That said, if an element or attribute does need to change, we could execute apache rewrites for all old API calls and have the remap function applied to have the output match that of the old markup. Alternatively, the developer could simply modify their API call instead of having to change their codebase to match the markup changes. (Although we do not intend to change existing markup, if we do, we would advise developers to upgrade their code accordingly. That said, rather than having the applications fail during the transition, remap could be used to temporarily handle requests until the full codebase can be upgraded).
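Conceptually, remap is just a rename applied along a path in the output document before serialization. The toy sketch below operates on a nested Python dict rather than real NPRML, purely to show the mechanics; it is not the NPR implementation.

```python
# Rename output elements at request time based on a remap-style parameter,
# e.g. "list.story.title:specialTitle". Works on nested dicts for brevity.

def remap(doc, remap_param):
    """Apply each path:newName rule to the document in place."""
    for rule in remap_param.split(","):
        path, new_name = rule.split(":")
        _rename(doc, path.split("."), new_name)
    return doc

def _rename(node, parts, new_name):
    key, rest = parts[0], parts[1:]
    if key not in node:
        return
    if not rest:                                  # leaf: rename the key itself
        node[new_name] = node.pop(key)
        return
    child = node[key]
    children = child if isinstance(child, list) else [child]
    for c in children:
        _rename(c, rest, new_name)

doc = {"list": {"story": [{"title": "Story A", "image": {"caption": "c1"}},
                          {"title": "Story B", "image": {"caption": "c2"}}]}}
print(remap(doc, "list.story.title:specialTitle,"
                 "list.story.image.caption:imageCaption"))
```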

Be Fast: Set Up a Comprehensive Caching Architecture
Performance is another critical aspect of APIs when it comes to enticing developers to use them. After all, if the API is sluggish, developers may not want to make their applications depend on it.

Smart caching of queries and results can really improve the speed of the system. NPR has implemented several layers of caching for the API, as follows:

  • Base XML – Caching the full document for each item is important to prevent the system from executing disk I/O before doing any transform. We cache the Base XML first in memory and secondarily as XML files to eliminate the need to access our content database.
  • Full Query Results – When compiling the list of items to be returned for any given story, it is important to cache the full list because popular applications that have many concurrent users (such as NPR Addict) are very likely to execute the same queries and expect the same results. The cached result is a single document containing the full list of all items and the full base XML for each.
  • Transformed Query Results – The calling application, such as NPR Addict, expects the document to be transformed to fit the application’s needs. So, the results that get cached in Full Query Results may get transformed to MediaRSS while simultaneously removing extraneous fields. Caching the final results that get returned to the calling application enables the fastest performance without compromising the system’s ability to use the other caching layers to produce different versions of the document.

NPR API caching architecture diagram
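The lookup order implied by these layers can be sketched in a few lines: check the transformed-result cache first, then the full-query cache, then fall back to assembling from base documents. Plain dicts stand in here for whatever cache technology is actually used; expirations and invalidation are omitted.

```python
# Layered caching sketch: transformed results -> full query results -> base docs.

base_doc_cache = {}          # story id        -> base document
full_query_cache = {}        # query key       -> list of base documents
transformed_cache = {}       # (query, format) -> final response string

def load_base(story_id):
    doc = base_doc_cache.get(story_id)
    if doc is None:
        doc = {"id": story_id, "title": "Story %d" % story_id}   # "disk/db"
        base_doc_cache[story_id] = doc
    return doc

def run_query(query_key, story_ids):
    docs = full_query_cache.get(query_key)
    if docs is None:
        docs = [load_base(s) for s in story_ids]
        full_query_cache[query_key] = docs
    return docs

def respond(query_key, story_ids, fmt="json"):
    key = (query_key, fmt)
    if key in transformed_cache:                 # fastest path
        return transformed_cache[key]
    docs = run_query(query_key, story_ids)       # middle layer
    body = ";".join("%d:%s" % (d["id"], d["title"]) for d in docs)  # "transform"
    transformed_cache[key] = body
    return body

print(respond("id=1001", [1, 2, 3]))     # cold: fills all three layers
print(respond("id=1001", [1, 2, 3]))     # warm: served from the top layer
```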

Give Them Tools: Provide a Query UI with the Documentation
There are two truths about developers and documentation: the former always expects the latter, but seldom uses it. Of course, you cannot have an API without providing comprehensive documentation. That said, offering a simple user interface that helps developers get what they need from the API will increase adoption and make life easier for them.

NPR’s API launched with a tool that we call the Query Generator. This tool exposes more than 6500 query-able IDs, methods for controlling the output format, fields to be returned, date and search restrictions, pagination, and more. Using the interface, the developer can select their options and have the tool create the query string for their API request. The developer can also see the results of that query inline before committing it to their application. Almost exclusively, developers (including the NPR staff) use this tool to create queries, rather than reading the documentation.

Be Open: Eliminate Rate Limiting
Throttling or limiting access to APIs is an inherent disincentive for developers. Moreover, it is actually a detriment to the API provider. After all, the purpose of the API is to grant access to the content. If a given developer can only call the API 5000 times a day, and that developer creates a hugely popular application, the rate-limiting will inherently stifle the developer and the viral nature of the API.

Granted, most APIs use rate-limiting or tiered access levels to allow business people to control the graduation of API users. This seems counter-productive to me though. The better approach is to open access completely, identify those incredibly successful usages, then work with the developer accordingly on a mutually beneficial relationship. This way, applications are given full ability to grow and mature without arbitrary constraints.

Other APIs implement rate-limiting to protect the servers from unexpectedly high load. This is a legitimate risk which, if encountered, can adversely affect the performance of all users. That said, building complicated features into the system, such as rate-limiting, can be much more costly than configuring a scalable server architecture. Moreover, each request to the API will see slight latency increases as a result of the rate-limiting analysis. I know that latency is marginal, but why introduce any additional latency, especially when creating disincentives for developers?

Be Agile: Practice Iterative Development
Building your API over time has several benefits. First, it signals to the developer community that this API is meaningful to the provider and will continue to grow and get supported over time. This sounds trivial, but it is a very important part of the relationship with the community. If developers are not sure about your commitment to the API, are they likely to spend their own time building an application around it?

Another benefit of iterative development is that you do not have to get the API perfect the first time. I will qualify that by saying that, as a matter of principle, any release for an API should be done with the expectation that it will be supported for a long time. This is important because changes to existing API features will break the applications of those that use them. When I say the API doesn’t have to be perfect, I mean it does not have to be complete. New features can (and should) be added over time, extending its capability and making it more attractive for potential developers.

To put it another way, you will not have every detail of the API solved at the initial launch. It is much better to go live with the features that you know well while deferring those that you do not. Trying to cram in tenuous requirements will create headaches for you and for the community down the road. Spend the time necessary on figuring out the features, the supporting markup, the access and error methods, etc. before you commit to an API feature.

How to Make Money With Your API

By , December 10, 2012 8:30 pm

This post first appeared on ProgrammableWeb.com

One of the questions that I am most frequently asked regarding content APIs is “how can I make money with my API?” Before answering that question, however, it is important to ask for whom the API is designed. After all, the audiences for your API will determine what business opportunities exist.

The most common target audience for APIs is the developer community. While that audience is an interesting and potentially important one, it is not where the greatest value can be realized.

When we launched the NPR API in 2008, we established four target audiences, each of which was important. The target audiences were (and still are):

  • NPR: NPR is of highest importance because, as we build all of our systems, mobile apps, etc., we need to be as nimble and efficient as possible. We have adopted this so deeply that the API is the foundation of everything that we do, including acting as the content source for NPR.org.
  • NPR member stations: NPR member stations are a critical aspect of the NPR mission and business model. Offering the stations a new, more effective way to get NPR content in a robust way better serves the stations and their communities, as well as NPR.
  • NPR partners: Having the API quickly became a more effective way to interact with content aggregators, business partners and other commercial entities with whom we established relationships. In fact, the API became a business development tool where some external organizations approached us because we had a robust API.
  • the general public: Finally, as part of our public service mission, it was and is important for NPR to share our content with the world. Exposing it to the developer community is a natural extension of this effort. When we launched the API, we fully expected this to be where true innovation with the API would take place. In fact, the day after our launch, I told CNet that the community of “developers will come up with a lot of brilliant ideas.”

With the API live for a full two years, I decided to look more closely at how effectively the API has been serving these four audiences. Although I am not surprised by the results, you may be…

The following charts show the distribution of how many API keys are registered by each of our four audiences. That metric is then compared to the consumption of the API (as measured by API requests) by the four audiences:

Obviously, there are many more API keys registered to the general public than the others. In fact, our API currently has over 10 times more public keys than all other keys in the system combined.

Despite the disparity between public keys and those used by other audiences, the dominant group from a request perspective is overwhelmingly NPR, responsible for more than 92% of the total number of requests. That means that the remaining 8% of requests are coming from all three other target audiences combined.

When considering this distribution of requests by audience relative to the key distribution by audience, it is clear that NPR has by far been the most effective user of the API. So, given the incredible amount of consumption by NPR, how has that translated into revenue opportunities? Below is a chart detailing the growth in total page views across all NPR platforms over a twelve-month span:

By the end of the twelve months, NPR’s total page views had increased by more than 100%. How were we able to add that many page views in such a short amount of time? The API. Not directly, but the API did enable NPR product owners to quickly, efficiently and independently build specialized apps on various new platforms. As a result, what we have seen is primarily additive growth. In other words, in addition to NPR.org’s own growth (about 19%), we have been able to add the NPR News iPhone app, the improved mobile site, the Android app, the iPad app, etc., each of which adds page views. From our analysis, these new platforms are generating new traffic rather than cannibalizing page views from NPR.org in any substantive way. These new page views create new sponsorship/advertising inventory, which in turn creates new revenue opportunities.

So, when asked “how can I make money with my content API?”, the answer should always be based on your target audiences. In NPR’s experience, the best way to make money is to focus on how the API can improve your internal processes. Of course, it is still important to maintain a solid support and growth model for the other audiences as well, but we cannot all be Google, Netflix, Twitter, etc. Unless you are planning to spend a lot of money on community engagement, you are better served by making sure the API liberates your product owners to grow your business more quickly, efficiently and independently.

In other words, don’t assume that the API’s primary audience is the developer community. Question that default position and do the introspection that will enable you to get the maximum value out of your API.

Content Portability: Building an API is Not Enough

By , December 10, 2012 8:20 pm

This post first appeared on ProgrammableWeb.com

My previous posts focused on COPE (Create Once, Publish Everywhere) and content modularity, the fundamentals for ensuring that content can be managed and distributed to virtually any platform. But ensuring that your content can be delivered to those other platforms does not mean that it will display appropriately on them.

Content often contains very important semantic markup, used to emphasize the content, relate it to other content, describe it, etc. By markup, I mean HTML, character encodings and microformats, among others. Although this markup is important to the content, it also makes it “dirty”, potentially compromising its ability to live and flourish in the myriad places to which it will get distributed. No matter how modular the content is in the database, if it is sullied by this markup, it is not truly portable. As a result, just building an API is not enough. The API needs to be able to distribute the content to any platform in a way that each platform can handle.

To demonstrate this portability problem, I often use the pre-iPhone iPod as an example. That device did not parse HTML; tags would simply be printed as strings. When podcasting took off, some NPR titles had HTML tags in them, including < em > and < strong >. Because iPods were not able to render the HTML, titles would look something like, “This is a < em >great< /em > title!” Similarly, another failure scenario that is relevant to NPR is the HD Radio display. These devices are also unable to render markup, so they print the tags to the screen.

There are two primary ways of handling this problem. The more common way is to store the dirty content in the database and to maintain a series of scripts that clean it on the way out. Although this is potentially effective for specific goals, there are some significant problems with it. For starters, stripping out the markup as it gets distributed means that the markup still lives with the content in the database. As a result, as new platforms arise and as markup standards evolve, the markup in the content will remain static. So, each distribution script that handles the markup will need to be carefully maintained and updated accordingly. Moreover, since each distribution platform may have its own level of compliance with the various forms of markup, each of these outputs may require its own script to handle the content (that is, the more distribution channels there are, the more scripts there are to maintain). Finally, the majority of systems that allow markup in this way do very little to limit the type of markup that is used. Because of the tremendous variance in how the markup is used in the content, these scripts will need to be increasingly complex, making their accuracy tougher to guarantee.
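To make the maintenance burden concrete, here is a minimal, hypothetical sketch (in Python) of what the “clean it on the way out” approach tends to look like. The platform names and functions are illustrative assumptions, not NPR’s actual distribution scripts:

    # Hypothetical output-side cleansing scripts, one per distribution platform.
    import re

    def strip_for_hd_radio(text):
        # HD Radio displays cannot render any markup, so remove every tag.
        return re.sub(r"<[^>]+>", "", text)

    def strip_for_legacy_ipod(text):
        # Pre-iPhone iPods printed tags as literal strings; strip the tags we allow.
        return re.sub(r"</?(em|strong|a)[^>]*>", "", text)

    def transform_for_partner_feed(text):
        # A partner that accepts limited HTML might need <b> rewritten as <strong>.
        return text.replace("<b>", "<strong>").replace("</b>", "</strong>")

Every new platform adds another function like these, and every change to the allowed markup means revisiting all of them, which is exactly the maintenance problem described above.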

Rather than handling the cleansing process on the way out, NPR has created a system that cleans the content on the way in. The goal here is to save the content in the database in a modular AND portable way. That means that each discrete object type is stored separately while ensuring that text content in each object is devoid of markup. I call this system “Markup Addressing” and here is how it works:

  • A range of fields in the system are markup-enabled, allowing Editors and scripts to include HTML and other markup values in the content directly.
  • For each field that allows markup, very specific values are allowed. Some fields allow more, some less, but all fields are limited to nothing more than the 25 tags and character encodings that the system as a whole allows.
  • We apply client-side handling to ensure that no markup beyond what the field allows is used in that field. We also enforce proper nesting and syntax for the markup.
  • Before saving the clean and acceptable markup to the database, we identify all markup in each field and begin our “addressing”, which essentially means identifying the character positions of the markup within the text. For each tag or character identified, we find the character position where it starts and, if applicable, the character position of the close tag. We then strip the markup out of the text and store, in a relational table, the address in the text where the markup was found.
  • This relational table does not include the markup itself. Rather, the markup is stored in a separate table that is the authority for which tags are allowable. The image below represents roughly how we store this kind of information.

NPR Flow of Markup Through Content


The diagram above represents how NPR strips out markup from content fields prior to saving to the database. The markup is then “addressed” and stored in a series of relational tables, enabling any presentation layer to present the content with or without markup. It even allows the markup to be easily transformed as needed before pushing to different platforms.
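To make the addressing step more concrete, here is a minimal sketch in Python under a simplified data model. The ALLOWED_TAGS set stands in for the authority table, tag attributes (such as href) are ignored for brevity, and none of this is NPR’s actual implementation:

    # Strip allowed tags from incoming text and record where each one was found.
    from html.parser import HTMLParser

    ALLOWED_TAGS = {"em", "strong", "a"}  # stands in for the authority table

    class AddressingParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.clean_chars = []   # characters of the markup-free text
            self.addresses = []     # one row per tag: {"tag", "start", "end"}
            self._open = []         # stack of (tag, start position)

        def handle_starttag(self, tag, attrs):
            if tag in ALLOWED_TAGS:
                self._open.append((tag, len(self.clean_chars)))

        def handle_endtag(self, tag):
            if self._open and self._open[-1][0] == tag:
                _, start = self._open.pop()
                self.addresses.append({"tag": tag, "start": start, "end": len(self.clean_chars)})

        def handle_data(self, data):
            self.clean_chars.extend(data)

    def address_markup(dirty_text):
        parser = AddressingParser()
        parser.feed(dirty_text)
        return "".join(parser.clean_chars), parser.addresses

    clean, addresses = address_markup("This is a <em>great</em> title!")
    # clean     -> "This is a great title!"
    # addresses -> [{"tag": "em", "start": 10, "end": 15}]

The clean text and the address rows would then be stored separately, with each row referencing the allowed tag by its ID in the authority table rather than storing the literal markup.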

There are several very tangible benefits to this approach, all of which improve overall portability of the content. These benefits include:

  • Distributing the content without any markup is as simple as pushing out the content from the database directly, without any further processing. This is helpful for platforms that are unable to render markup, including those mentioned in my examples above.
  • Distributing the content with the original markup is just as easy by reassembling the markup based on the addresses.
  • It is easy to distribute only some of the markup based on what the markup is. An example of this is when the destination product wants to emphasize content but does not want to allow links to other content.
  • As markup such as HTML tags gets deprecated, this approach requires a change to only one field in the entire database, instead of having to cycle through the database to find all instances of the old tag and replace it with the new one. For example, < b > has been replaced with < strong >, so we simply modify the one record in the authority table for tags to apply this change across the entire set of content.
  • As new platforms arise, if they require specialized markup, it is easy to transform the existing markup to anything else required for these new platforms.
  • Adding new allowable tags is easy: simply extend the client-side handling and the authority table. These tags can include microformats and other business-critical tags that help describe the content. For example, NPR could very easily create an internal < station > tag such that, for every station that gets tagged, rather than rendering the tag, the system looks up the station in our database and replaces the < station > tag with a hyperlink to the station’s home page.

NPR’s system applies these methods to specific fields throughout our CMS. When distributing the content through the API, however, we currently apply the power of Markup Addressing only to the story’s full text. The API offers a < text > field, which removes all markup for syndication, as well as a < textWithHtml > field, which reassembles the content with all of its markup. Extending this to all other markup-enabled fields would be quite easy under this system, although there has not yet been a need to do so.
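To illustrate how the plain and marked-up outputs could be produced from the same stored data, here is a hedged Python sketch of the reassembly step, continuing the example above. The optional allowed parameter is a hypothetical hook for the “distribute only some of the markup” case mentioned earlier; it is not how NPR’s API is actually implemented:

    def reassemble(clean_text, addresses, allowed=None):
        # Re-insert tags at their stored character positions. Passing an empty
        # set reproduces the plain text output; passing None restores all markup.
        # Tie-breaking for tags that share a boundary is omitted for brevity.
        inserts = []
        for addr in addresses:
            if allowed is not None and addr["tag"] not in allowed:
                continue
            inserts.append((addr["start"], "<" + addr["tag"] + ">"))
            inserts.append((addr["end"], "</" + addr["tag"] + ">"))

        # Work from the end of the string backwards so earlier positions stay valid.
        out = clean_text
        for pos, markup in sorted(inserts, key=lambda x: x[0], reverse=True):
            out = out[:pos] + markup + out[pos:]
        return out

    text = "This is a great title!"
    addrs = [{"tag": "em", "start": 10, "end": 15}]
    reassemble(text, addrs)                  # "This is a <em>great</em> title!"
    reassemble(text, addrs, allowed=set())   # "This is a great title!"

A per-platform transformation (for example, rewriting a deprecated tag, or expanding a hypothetical < station > tag into a link) would be a small variation on the same loop, applied to the markup strings before they are re-inserted.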

Regardless of which approach is taken, there is one other significant issue that prevents true portability of content… the content itself!

I draw a distinction between “content” and “calls-to-action” to help clarify this problem. Content is the information that users actually want to consume. It can also include metadata, which helps accurately describe the content the user is consuming. Within this content, markup that emphasizes it or relates it to other content should be applied in such a way that the meaning of the content is unaltered by the abstraction of the markup from the content. Here is an example of an appropriate way to apply markup to content:

NPR Example of Good Markup in Text

This image is part of an NPR story that demonstrates appropriate use of HTML within the body of the text. The artists’ names link to artist pages, but the meaning of the story is completely unaltered by the removal of the markup.

In this scenario, removing the links to the artists’ names in the text, for example, does not alter the meaning of the content. Of course, it does diminish some of its power as the user cannot easily learn more about these artists within the context of this story. That said, distribution of this content without those links will not adversely affect the meaning of the story. The artist names are valid and appropriate within the body of the text.

Applying markup within the content that is calling the users to perform an action, on the other hand, poses a different problem. Here is an example of a call-to-action within the content:

NPR Example of Bad Markup in Content

This image is part of the same NPR story, demonstrating calls-to-action that leave the content unable to convey its meaning without the context of the markup. These calls-to-action make the content less portable, specifically to platforms that are not markup-enabled.

Notice that within this content there is a link to related content where the link text is “Listen to The Entire Album”. Abstracting away the link actually alters the meaning of the text, as the text itself provides no information about the audio asset; there is no indication of which album or artist it refers to. So, as this content gets distributed to platforms (both known and unknown), pulling out the markup adversely affects the content.

This is a problem for every content producer, including NPR. Although we have gone to great lengths to put the content in the best position to live and thrive on all platforms, there is still work to be done to ensure the success of our distribution strategies. Some of these efforts are technical in nature. Others could impact editorial processes and style guides. But in all cases, our goals are the same… to be a media organization that produces great content for our users, wherever they wish to consume it.
