
BBC iPlayer: Architecting for TV


Summary

David Buckhurst talks about BBC iPlayer and explores the challenges of TV application development; from their early days chasing new native experiences, to the development of their open source libraries and standards-based certification. He also touches on the next steps for iPlayer as they blur the lines between broadcast and IP television.

Bio

David Buckhurst is an engineering manager at the BBC, where he looks after the teams who develop interactive TV applications (iPlayer and Red Button). He has been a vocal advocate of automated testing for years, having really seen the value of automation while developing emulator technology such as Apple’s Rosetta. He led the development of Hive CI, the BBC’s device testing cloud.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Buckhurst: My name is David Buckhurst. I'm an engineering manager for iPlayer in BBC Design and Engineering, which is the technology heart of the BBC. I look after 10 or so teams who build our big-screen experiences - not just iPlayer but any of our apps that run on set-top boxes, TVs and games consoles. What's this talk about, then? Hopefully you've read the abstract, so it's not a total surprise. But this isn't the usual talk we do when we're talking about iPlayer. There are some great talks on the internet about Video Factory, about continuous delivery, about how we do our delivery pipeline. But I'm going to talk about TVs.

On and off for the last 8 to 10 years I've been building video applications, particularly big-screen TV experiences. This is a domain that I'm heavily invested in, and yet TV application development is largely a bit of a mystery. The iPlayer TV and mobile teams are all based in Salford, at MediaCity, pictured. It serves as a base for a big proportion of our Design and Engineering teams, certainly the biggest proportion of our iPlayer people. Here's a picture from outside our office. For a bit of context, a lot of content production happens at the BBC, particularly in Salford, so it's not uncommon for something totally random to be happening in the morning when you come in, like a jousting tournament.

It is very different from anywhere I've worked before, which has always been software companies - tech companies about selling software and services. I'm also always asked to put more pictures of our offices in our talks, but really the primary feature of our offices is the sheer number of TVs and devices everywhere. We've got walls for monitoring - this one is specifically our broadcast data player. We have store rooms that are full of TVs, and we've got hundreds of TVs in offsite storage. If an audience member phones up and they've got a problem with a particular device, we can quickly retrieve it and test it to see if there's a problem there. We've got racks and racks of old set-top boxes and things like that for testing our broadcast chain. Even if we find an unused corridor, we're pretty likely to just line it with TVs for testing. This did turn out to be a fire escape, so yes, I got in a little bit of trouble for that. Learn from your mistakes.

Why do we work in an office where TVs outnumber humans 10 to 1? Well, there are two important factors in TV app development. One is the TV ecosystems themselves, which are a challenge, and we'll talk about that shortly. The second is our public service remit - pictured here is the Royal Charter, which sets out our purpose. Central to the idea of what we are as a public service entity is this concept of universality. There are 25 million homes in the UK, and we're obliged to make sure that BBC content is available to as many of those homes as we can, which means we can't really target specific devices. We can't go after just the high-end TVs that are easy to work with. We have to target as wide a range of devices as we can in order to make sure we hit as many homes as we can.

What’s a TV App?

So, what's a TV app then? Well, I'm assuming everyone knows how to build web applications, and the bar for mobile application development has really lowered these days. I mean, none of this is easy, but it's a known thing, whereas TV apps are still a bit of a mystery. While we have multiple web and mobile development teams all across the BBC, sitting with production teams and so on, all TV development happens on one floor in Salford. So we build a number of different apps for all the different departments in the BBC.

There are roughly three categories of application as we think of it: there are broadcast applications, there are JavaScript applications, and there are native applications. Broadcast applications are the first classification. These have been around for a while - they were developed in the '60s and launched in the mid-'70s, if you remember Ceefax and Teletext. Not a lot of people are aware, but when you're watching a TV show on a BBC channel, there's actually an application running in the background, and that's running in MHEG. It allows us to do things like expose red button triggers.

MHEG, not to be confused with MPEG, stands for the Multimedia and Hypermedia Experts Group; it's basically a platform and a programming language that allows you to display interactive text and graphics - things like this. The red button digital text services are built in MHEG. It lends itself well to the idea of scenes, mixing graphics and text together. Radio slates - if you go to a radio channel, those are built in MHEG. It is a legacy technology and the industry is moving away from it, but it's still a pretty major platform for us, although things like HbbTV are coming along to replace it.

MHEG is significant for a couple of reasons. One, we still use it a lot, and we'll talk about that in a bit, but also the first implementation of TV iPlayer was built in MHEG. iPlayer launched in December 2007, initially as a desktop app, and there was a lot of controversy at the time, I recall. I wasn't working for the BBC then, but I remember there was a big argument about who should foot the bill as the internet was suddenly going to become this place for distributing video. But 2007 really marks the birth of internet video services, so there wasn't really any looking back. So this was the first TV iPlayer, built in MHEG. It is a technological marvel. I'll happily talk about that if you join me in the pub later. Yes, not going there.

The second classification is JavaScript applications. TVs started to feature basic HTML browsers supporting JavaScript. If you remember doing web development in the '90s, you've got the idea, right? These were ancient forks of open source browsers. They were all a nightmare to work with. No two browsers were the same. However, it was the direction the industry was going in, and we were about to get quite heavily invested in it. Pictured here is iPlayer - this is the JavaScript implementation. Mostly, applications these days on TVs are JavaScript apps running in browsers. We've got other apps: there's a sports app, which is a sort of blend of news, live content and text content, and there's a news app, which again is JavaScript but focused on text stories. Most of our development effort goes into building and improving the JavaScript apps and ecosystem, and really, that's what this talk is about.

Very briefly, I mentioned the native applications - for example, the Apple TV application we've got here. I'm not really going to go into these in any detail other than to say we used to build a lot of bespoke native applications, and it's not really the direction that was right for us; it's something we've been moving away from. Finally, it's worth mentioning that our integration with a platform isn't just about an app. For example, to promote your content on some platforms you have to expose feeds. This is a Samsung homepage, and we've got our recommended content appearing as a row. There is native integration, as well as the apps themselves.

Bet on TV

2012, which was the year I joined the BBC, was also the year of the London Olympics. That was the year the BBC really decided to get behind TV experiences. The BBC's promise was to deliver coverage of every single sport from every single stadium, and so digital offerings were really going to play a major part in being able to do that. When I joined, there were 14 different iPlayer for TV code bases. We had custom Wii and Xbox apps, MHEG implementations, multiple HTML and JavaScript applications, ActionScript. Supporting all those different code bases took pretty much the full time of the department, so the desire to build any more apps was just unsustainable. We started to focus more on a standards approach - leveraging web standards, but for TV browsers. That would help us scale better for the audience, and that standard was HTML5 running JavaScript.

We published a specification that sets the bar for TV application experiences. We didn't just want to build to the lowest common denominator of a TV browser, but wanted a rich experience for everyone. We've got a spec and we publish that every year, updating it depending on what new capabilities we want to support, and then we've got strict certification testing. This is a queue of TVs waiting to be certified: manufacturers send us their TVs, then we certify them to see whether iPlayer's performance is what we expect on those devices. So we test that it meets the spec.

There was also an ambition for other TV apps, so news and sport, etc. But with all TVs using different browsers, a big part of the iPlayer code base was the abstractions that allowed us to deal with all the different browser types that we had. We extracted that into something called TAL, which is our open source platform for building TV applications, which we released about five years ago. This quickly meant we could spin up teams to build new apps and it helped us scale. Developers wouldn't have to deal with abstracting the differences between all the different browsers.

What is TAL, then? TAL gives us two things. There are a bunch of abstractions - things like video playback, which is exposed very differently on different devices, and even things like pressing the up key, which varies from device to device. That gives us a base on which to build all our other applications. The other part of TAL is a UI builder that lets you use widgets and components to build up your UI. This let us spin up some teams; we were able to build some new application experiences without having to multiply our code bases by 14 for each product. Pictured here are the four main apps from about five or six years ago. They do look quite similar, but that's more due to the designers sharing their designs than anything TAL really gave us, because TAL was quite a low-level abstraction.
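To make the idea of that abstraction layer concrete, here is a minimal sketch of the pattern described above: one consistent interface for keys and video playback, with per-device implementations behind it. This is not TAL's actual API - the class names, key maps and the VendorXMediaPlayer object are invented for illustration.

```javascript
// Illustrative only: not TAL's real API, just the shape of the idea.
// A "device" maps platform-specific details (key codes, media APIs)
// onto one consistent interface that application code can rely on.

class BaseDevice {
  // Map a raw keydown code to a logical key name the app understands.
  mapKey(rawKeyCode) {
    throw new Error('mapKey must be implemented per device');
  }
  // Start playback of a stream URL using whatever the platform exposes.
  playVideo(url) {
    throw new Error('playVideo must be implemented per device');
  }
}

// One family of TVs might use standard DOM key codes and an HTML5 <video> element.
class GenericHtml5Device extends BaseDevice {
  mapKey(rawKeyCode) {
    const keys = { 38: 'UP', 40: 'DOWN', 37: 'LEFT', 39: 'RIGHT', 13: 'SELECT' };
    return keys[rawKeyCode] || 'UNKNOWN';
  }
  playVideo(url) {
    const video = document.createElement('video');
    video.src = url;
    document.body.appendChild(video);
    return video.play();
  }
}

// Another family might report vendor-specific key codes and need a platform object.
class VendorXDevice extends BaseDevice {
  mapKey(rawKeyCode) {
    const keys = { 1: 'UP', 2: 'DOWN', 3: 'LEFT', 4: 'RIGHT', 5: 'SELECT' };
    return keys[rawKeyCode] || 'UNKNOWN';
  }
  playVideo(url) {
    // Hypothetical vendor media object; real platforms differ wildly.
    const player = window.VendorXMediaPlayer;
    player.open(url);
    player.play();
  }
}

// Application code only ever sees logical keys and a single playVideo call.
function onKeyDown(device, event) {
  const key = device.mapKey(event.keyCode);
  if (key === 'SELECT') {
    device.playVideo('https://example.invalid/stream.mpd');
  }
}
```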

But it was a big step forward for us, TAL. It's used by a number of different TV application developers. It also allows manufacturers to contribute back when their platforms change. And on rare occasions, I've hired developers who've worked on TAL. It definitely has paid off for us. But there are numerous challenges still. So as the number of apps grew and the number of devices grew, the approach didn't necessarily scale. It coped well with the device abstraction, but from a user perspective, and from a development perspective, there are issues.

From the user perspective, these were separate experiences. They weren't really a unified offering, and it was very slow to switch between applications. From a developer perspective, we weren't really gaining much advantage from our shared TAL layer, because it was so low level that there was a lot of duplication of effort. If we fixed a playback bug in iPlayer, we'd still have to port that fix into news, sport and red button. There was quite a lot of overhead, and each of these apps also had a totally different mechanism for launching. That's something I didn't really want to go into much in this talk, but if you think about it, with TV apps, while there might be browsers, you can't just type in a URL. There's a whole team I've got that is focused on how you launch into applications in the right place.

The TV Platform

There was also the reality that, I think, TV wasn't growing as an audience platform as much as we'd hoped. There was a lot of continued investment in something where we didn't know when it was going to pay off, and a lot of people still thought that mobile was the real bet. So, what we had was four fundamentally different code bases built on one platform. We decided to introduce another layer. The idea here was that we rebuilt all of our applications as a single platform, using config to drive out the differences between the products. For example, our sports app was just iPlayer with some sports branding and a yellow background, but with substantially different programme data, which meant it was a totally different experience. So essentially we had a monorepo and a single runtime - a single deployable client, so all our applications ran the same code, just exposed with a different kind of startup parameter. Immediately, it eliminated a vast amount of duplication. Any change we made to one product would immediately result in improvement in all the others. TAL upgrades only had to happen once, and then the whole thing was config-driven.
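A rough sketch of what "one client, many products, driven by config" can look like. The product names are the real BBC products mentioned above; the config shape, fields, URLs and colour values are invented here purely for illustration.

```javascript
// One deployable client; the product is selected by a startup parameter
// and everything else - branding, data feed, features - comes from config.

const PRODUCT_CONFIG = {
  iplayer: {
    theme: { background: '#2e2348' },
    dataFeed: 'https://example.invalid/feeds/iplayer',
    showLiveChannels: true,
  },
  sport: {
    theme: { background: '#ffd230' }, // the "yellow background" case
    dataFeed: 'https://example.invalid/feeds/sport',
    showLiveChannels: true,
  },
  news: {
    theme: { background: '#b80000' },
    dataFeed: 'https://example.invalid/feeds/news',
    showLiveChannels: false,
  },
};

function startApp(productName) {
  const config = PRODUCT_CONFIG[productName];
  if (!config) {
    throw new Error(`Unknown product: ${productName}`);
  }
  document.body.style.backgroundColor = config.theme.background;
  // The shared code path is identical; only data and branding differ.
  return fetch(config.dataFeed)
    .then((response) => response.json())
    .then((data) => renderHome(data, config));
}

function renderHome(data, config) {
  console.log('Rendering home with live channels:', config.showLiveChannels, data);
}

// e.g. the launch mechanism passes the product in as a startup parameter.
startApp('sport');
```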

I've captured here one of our technical principles. It's also our longest technical principle, but it's a really important one, which I think is key to how we've actually evolved our TV apps over the years. The principle, SO1, is that the logic for our products and systems is executed in the most consistent runtime environment available, avoiding the need for runtime abstraction and logical duplication wherever possible. So really, the move to a platform model for the client meant we needed to do more work in the services, as we wanted to push as much logic out of that client as we could. So we introduced a backend-for-frontend architecture.

This is my really rough diagram of what that looks like, but essentially we were pulling BBC data from all over the place. Our TAP client had a consistent idea of what the schema for its data should look like. What our backend-for-frontend architecture did was provide services - we called them mountains, just because we like naming things after random things - that basically allowed us to take data from different parts of the BBC and then present it in a way that the client understood. That was great. We were moving to the cloud, so we had the opportunity to modernize the way we worked and move to continuous delivery. I put this diagram up - I don't expect you to get anything out of it really, but it illustrates the complexity of the estate we were dealing with: the sheer number of BBC systems that we were pulling data from, and then mangling it into a form that the client could deal with.
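A simplified sketch of that backend-for-frontend transformation step: take data from an upstream BBC system in whatever shape it arrives, and present it in the one schema the TV client understands. The field names, upstream shape and URLs below are invented for illustration, not the real BBC feeds.

```javascript
// What an upstream programme-metadata system might return (hypothetical shape).
const upstreamEpisode = {
  pid: 'b0abc123',
  titles: { brand: 'Bodyguard', episode: 'Episode 1' },
  synopses: { short: 'A war veteran is assigned to protect a politician.' },
  image_template: 'https://example.invalid/images/{pid}/{width}.jpg',
};

// The consistent shape the client expects, regardless of where the data came from.
function toClientSchema(episode) {
  return {
    id: episode.pid,
    title: `${episode.titles.brand}: ${episode.titles.episode}`,
    synopsis: episode.synopses.short,
    // Resolve the image template to a concrete URL sized for a TV screen.
    imageUrl: episode.image_template
      .replace('{pid}', episode.pid)
      .replace('{width}', '832'),
  };
}

console.log(toClientSchema(upstreamEpisode));
// -> one normalized episode object, ready for the client to render
```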

This one is a slightly easier picture. Essentially, you can see we had some AWS services, some on-prem services, and then a real separation of our business concerns and the data logic. It let us move faster - all the good things from continuous delivery and DevOps, allowing us to scale services independently and test in isolation. But I don't want to cover that stuff too much.

TV Takes the Lead

It was round about this point that TV started to take the lead. When I started at the BBC, TV apps were about 10% of the overall iPlayer usage, but we've reached a point today where 80% of all internet traffic is video content. Growth over the last few years has been absolutely phenomenal. This is the latest graph I could find for iPlayer usage, where you can see it's just been consistent. It starts at 2009 here, and every year it grows and grows.

The more interesting graphic, at least from my perspective, is this one that shows the split between the different platforms. As I said, TV was initially at 10%, and we're now at nearly 60% of overall iPlayer usage, so it's certainly the premier platform. But we had been content with slowly evolving our offering, making sure we were optimized for engineering efficiency rather than the speed of change the BBC was now asking of us. The BBC was moving towards signed-in experiences, which would require a major reworking of our applications, and we were getting a lot more content. As iPlayer shifted from a catch-up experience to a destination in its own right, we were going to have a lot more stuff to show.

We had this large, monolithic front end that was slow to iterate on, and it was quite difficult to work on. While the backends were quite pleasant, in modern JavaScript, we had a frontend that no one really wanted to work on. So everyone said, "Look, we just want to do React." Then, in addition to this, we got more and more reports of iPlayer crashing, because as the client was getting bigger and bigger, what we found was the devices weren't quite coping with it. So it became clear that we were on the verge of the biggest change to how we were going to do TV application development. Could we get something like React working on these browsers?

We had a lot of preconceptions about what was and wasn't possible from years and years of dealing with devices that never really performed. We set ourselves three rules: where possible, we use off-the-shelf libraries; performance must be improved; and we can't start from scratch. iPlayer was far too mature a product to just start with a greenfield.

There were two things we had to understand about the devices we were building for if we wanted to make a major change like this, where we were pretty much changing everything: capability and performance. For the first one, capability, it wasn't really an option to go and get every single device we had out of storage and start seeing which flavors of JavaScript worked with them. We had to devise a mechanism whereby we could experiment and find out what the actual devices people were using were doing. What we did was, when iPlayer loads, there's a small try block where we can execute a tiny bit of arbitrary JavaScript. That let us run a small bit of React or webpack code, or whatever it might be, to see what the compatibility would be like on all the devices out there, because we still didn't know if these kinds of technologies would work or not.

Actually, the finding was that pretty much most of the devices out there would support React. It was only about 5% that couldn't, typically because they didn't support something like Object.defineProperty. Then we had some unknowns, where one day a device would report it could and the next day it couldn't, which we assumed was due to firmware changes on those devices adding support for more recent versions of JavaScript. So we knew it was possible, but the bigger challenge was around performance. We were already hitting performance issues with our custom-built framework, and specifically memory. We had a number of TVs we could use to do debug memory profiling, but we didn't really have any good way of doing it in a reliable, repeatable, automated way that we could actually get behind.
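A minimal sketch of the kind of in-app capability probe described above: run a tiny bit of feature-detection code inside a try block when the app loads, and report the result back so compatibility can be measured across the real estate. The specific probes and the stats endpoint here are assumptions for illustration, not the BBC's actual code.

```javascript
// Probe a couple of language/runtime features that modern libraries rely on.
function probeCapabilities() {
  const results = {};

  // Does this browser support Object.defineProperty?
  try {
    const probe = {};
    Object.defineProperty(probe, 'x', { value: 1 });
    results.defineProperty = probe.x === 1;
  } catch (e) {
    results.defineProperty = false;
  }

  // Are other basics we might depend on present (JSON, Function.prototype.bind)?
  try {
    results.jsonAndBind =
      typeof JSON.parse === 'function' &&
      typeof Function.prototype.bind === 'function';
  } catch (e) {
    results.jsonAndBind = false;
  }

  return results;
}

// Send the findings home; a 1x1 image request works even on very old browsers.
function reportCapabilities() {
  const results = probeCapabilities();
  new Image().src =
    'https://stats.example.invalid/capability?d=' +
    encodeURIComponent(JSON.stringify(results));
}

reportCapabilities();
```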

This is the rig we built for doing memory profiling. We did use a bunch of different devices, but largely Fire TVs, because they're Android TV devices, so they're easy to reboot and you get consistent results. We plugged a whole load of these into the Hive, which is our device testing farm, and we managed to get some consistent graphs out that showed us the memory use. You can ignore the actual numbers, because it's the difference between the numbers that's more significant. So we started to see these really interesting graphs of memory performance.

There was this particular kind of signature that we were seeing as you navigated through all of our applications. Point A is the point when someone has navigated to a new menu item, and therefore the whole process of requesting some new JSON, building up the UI and changing the experience on the screen happens. Then it sort of flattens, and at point B we'd see the memory drop down again. So our assumption at this point was that the memory overhead of parsing the JSON and building those UI components was causing the increase, and then at point B the garbage collector was kicking in.

So first we did the obvious thing: there was a memory leak - over a minute period we had this consistent line upwards - and that stuff was easy to fix. But then we started looking at builds from the last 12 months to really get an idea of whether there was a correlation between the features we built, the code we were writing, and the memory usage. That was quite startling. One of the main culprits was this diff, which shows the memory usage before and after what we called the purple rebranding. This was when we introduced the purple shards and things in the background. Basically, we were moving to very large background images - the sort of thing the devices we were working with didn't really like.

Another big change and culprit for memory bloat was just the sheer number of images on the display. Over the years, the amount of content in iPlayer has gone up and up, and therefore we've had far more images being loaded. That was one of the biggest changes to memory use. And then one of the massive ones we saw was background video: with iPlayer, if you're watching a video, you can go back and navigate the menus while the video plays in the background. That actually was one of the biggest causes of devices crashing - we saw that it really chewed up a lot of memory.

Our findings are summarized here. The number of images, processing of JSON and construction of the UI, video in the background, and going purple were the four main causes. What was clear, though, is that the way we were building UI wasn't that different from React, so having React on the device wasn't really going to solve any of the performance issues we had. What we've got is a fairly painful, memory-intensive process where we parse the JSON that the client requests, interpret what that means and work out what the UI should look like, and then build up a DOM and swap that out on the screen. React turned out to be pretty equivalent in its memory usage.

Our real preference was to remove that whole UI builder capability altogether, and so we basically moved to the idea of using server-side rendering wherever we could. This meant we have a hybrid app, where some of the logic is in the client, but a lot of it is HTML, JavaScript chunks and CSS built by the back end and then swapped in at the right places on the client. Performance massively increased - our lower-performing devices loved it. Memory usage really went down; the garbage collection seemed to kick in and manage better with that. There was less need for the TAL abstractions. And importantly, we had way less logic in the front end, so it was much easier to test and reason about.
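A simplified sketch of that hybrid shape: the backend renders an HTML fragment for a section of the app, and a thin client swaps it into the right place in the DOM rather than building the UI from JSON on-device. The render endpoint, element IDs and fallback behaviour below are assumptions for illustration.

```javascript
// Navigate by asking the backend for server-rendered markup for a section,
// then swapping it into the page, keeping only a little logic on the device.
function navigateTo(sectionId) {
  return fetch(`https://example.invalid/render/${sectionId}`)
    .then((response) => response.text())
    .then((html) => {
      const container = document.getElementById('content');
      // Replacing the markup wholesale lets the old DOM be garbage collected,
      // instead of keeping a large client-side component tree alive.
      container.innerHTML = html;
      // A small amount of client logic remains: focus and key navigation.
      focusFirstItem(container);
    })
    .catch(() => showBakedInFallback());
}

function focusFirstItem(container) {
  const first = container.querySelector('[tabindex]');
  if (first) {
    first.focus();
  }
}

function showBakedInFallback() {
  // If the render service is unavailable, fall back to minimal baked-in UI.
  document.getElementById('content').textContent = 'Something went wrong.';
}

navigateTo('home');
```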

This is before and after. We reduced memory usage quite significantly despite actually tripling the number of images and the complexity of the UI that we had. We were using significantly less memory, performance was better and we were motoring again.

Learning from Failure

I think the biggest change for us in iPlayer - and I think this is a common story everywhere - was taking on operational responsibility for our system. We literally went from a world where we handed over RPMs to an ops team, who managed any complaints and managed scaling, and then suddenly we had to learn all of this ourselves. So basically, we had every lesson about scaling and dealing with failure still to learn. This is why I love working for the BBC - this constant learning culture. We were no longer shielded from the daily barrage of audience complaints, and so we were just inundated with problems: apps failing to load, apps being slow on some devices.

But in a lot of cases, when we investigated a complaint, it might turn out to be a Wi-Fi problem or a problem with a particular device. We realized we really didn't have a good grasp of our domain at all. Of course, the only way to really improve something with confidence is to be able to measure the problem. So we went about building a telemetry system that could give us real-time insight into what was actually happening on our estate. This is actually last week's launch stats. Our telemetry system basically lets us put checkpoints at all stages of the app loading. There are lots of different routes into the app - you can press red, press green, you can launch through an app store - and this lets us break down by device model and by range, so that if we've got complaints or we're seeing problems, we can go and look at the stats and see whether it's a localized problem or a general one.
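A rough sketch of launch telemetry with checkpoints: record each stage of start-up, tagged with how the app was launched and what device it is, so failures can be broken down by model and by route in. The stage names, fields and endpoint are invented for illustration.

```javascript
// Context captured once per launch; in reality the device model and route
// would come from the platform and the launch mechanism.
const launchContext = {
  route: 'red-button',            // e.g. red-button, green-button, app-store
  deviceModel: 'ExampleTV-2017',  // hypothetical model identifier
  sessionId: Math.random().toString(36).slice(2),
};

function checkpoint(stage) {
  const beacon = {
    ...launchContext,
    stage,                 // e.g. 'app-requested', 'config-loaded', 'home-rendered'
    timestamp: Date.now(),
  };
  // Fire-and-forget beacon to a hypothetical telemetry endpoint.
  new Image().src =
    'https://telemetry.example.invalid/launch?d=' +
    encodeURIComponent(JSON.stringify(beacon));
}

// Sprinkle checkpoints through the launch sequence; a missing later checkpoint
// for a session tells you which stage that launch failed at.
checkpoint('app-requested');
checkpoint('config-loaded');
checkpoint('home-rendered');
```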

That really allowed us to tackle what the real problems were, and see which ones weren't. Another really interesting thing from the launch stats is the pattern of usage we have. Because we largely serve the UK, there's no one using it at night. When people get up in the morning, you get this little hump as people watch whatever they watch in the morning. But then, come eight o'clock at night, we get these massive peaks as everyone jumps onto iPlayer. We also see it a lot on programme boundaries: people finish watching whatever they're watching on TV, realize they don't want to watch the next thing, and so they put iPlayer on. Pretty much every half hour or every hour, we get these spikes as people decide they're going to launch into the iPlayer experience.

Despite that general pattern, which does allow us to scale - we really scale up in the evenings - we can get some real unpredictability, like this spike of nearly a million launches a minute that hit us last year. We're very sensitive to things like continuity announcers telling the viewer they can watch more content on iPlayer. If they say something like, "Oh yes, you can watch the rest of this show on iPlayer," we'll suddenly get a million people trying to tune into that. So our approach has been to scale up very aggressively at peak times, but we've also had to engage with our editorial colleagues and be part of that promotion strategy. We need to know when you're going to tell people to use our systems, particularly if you're doing it after 8 million people have watched something. But also, we can help them better understand how successful their promotions are, so it is win-win.

Another key strategy for dealing with the TV domain: almost everything in iPlayer has a built-in toggle, and a lot of this is driven by device capabilities - does a device support live restart, or how many images can it realistically cope with? This is our toggle for turning off the heartbeat that we send back during video playback. The complexity of the TAP ecosystem increased with data arriving from all sorts of places across the BBC, so it wasn't really good enough to assume that everything worked. Building in toggles became really, really important. We use them for testing the rollout of new features - we'll expose new features in the live system and then turn them on for testing. We can use them as operational toggles when we know there's a problem. And once we get happier with them, we can use them for automated fallback to degraded behavior.
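A minimal sketch of that toggle idea: features default on or off according to device capability, and can be overridden remotely for rollout or during an incident, with safe defaults if the toggle service itself is unreachable. The toggle names, endpoint and the `player` object are assumptions for illustration.

```javascript
// Capability-driven defaults for this device (values would come from detection).
const deviceCapabilities = {
  supportsLiveRestart: true,
  maxImagesOnScreen: 40,
};

const defaultToggles = {
  liveRestart: deviceCapabilities.supportsLiveRestart,
  playbackHeartbeat: true,   // the heartbeat sent back during playback
  recommendationsRail: true,
};

async function loadToggles() {
  try {
    // Remote overrides win, e.g. turning the heartbeat off under load.
    const response = await fetch('https://config.example.invalid/toggles');
    const overrides = await response.json();
    return { ...defaultToggles, ...overrides };
  } catch (e) {
    // If the toggle service itself is down, fall back to the safe defaults.
    return defaultToggles;
  }
}

// `player` is a hypothetical playback object with play() and sendHeartbeat().
async function startPlayback(player) {
  const toggles = await loadToggles();
  player.play();
  if (toggles.playbackHeartbeat) {
    setInterval(() => player.sendHeartbeat(), 30000);
  }
}
```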

A great example of this is during live events, when we get really high volumes of traffic - for example, the World Cup last year. We were streaming UHD; it was our big live UHD trial, and we knew we might have a problem with distribution capacity. This graph shows our top-end bit rates: about five megabits per second for HD streams, up to about 22 megabits per second for UHD, and then about 36 megabits per second for live UHD. The problem with live content is that it pretty much needs to be streamed simultaneously with the live event, and encoders aren't quite mature enough yet to achieve the level of compression we'd need to really bring that down. I think we worked out that if every compatible device in the UK tried to watch the UHD stream concurrently, the UK internet wouldn't actually have the distribution capacity to deal with it. So we derived this limit of 60,000 live streams, which was the level we could cope with for the World Cup.

We had to build something called our counting service, which allows us to get pretty much real-time insight into particular metrics we care about - in this case, how many UHD streams we've got. We did actually hit the cap a couple of times during the World Cup; one was the England vs Sweden game. New viewers coming in attempting to watch UHD were presented with an HD stream instead. That said, the worst did still happen: we were overloaded by traffic turning up to watch the last few minutes of the match. And there's nothing like being on the front page of the Metro to make you learn fast from failure.
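A sketch of how such a counting-service check might be used: before offering a UHD stream, compare a near-real-time count of concurrent UHD viewers against the cap, and fall back to HD once the cap is reached. The 60,000 figure comes from the talk; the endpoint and API shape are invented for illustration.

```javascript
const UHD_STREAM_CAP = 60000; // the derived live-UHD limit mentioned above

async function chooseStreamQuality(deviceSupportsUhd) {
  if (!deviceSupportsUhd) {
    return 'hd';
  }
  try {
    // Hypothetical counting-service endpoint returning the current UHD count.
    const response = await fetch('https://counting.example.invalid/streams/uhd');
    const { current } = await response.json();
    // New viewers only get UHD while we're under the distribution cap.
    return current < UHD_STREAM_CAP ? 'uhd' : 'hd';
  } catch (e) {
    // If the counting service can't be reached, degrade safely to HD.
    return 'hd';
  }
}

chooseStreamQuality(true).then((quality) => console.log('Playing', quality));
```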

While there's only a single deployable app at any one time, we can have multiple versions of the client talking to our backend systems. We've used versioned schemas a lot to be able to make those changes with confidence. What can happen is someone is using the client, watching a video; we make a change; someone else launches and gets the new version of the client. If there are any schema changes to the API between the back end and the front end, you're going to get a problem at that point. So schemas are great in that we can use them to generate test data that protects us from blind spots in our testing. We can also use them to check live data, to make sure that our upstreams are conforming to the schemas we've agreed and that there's nothing unexpected happening.
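A simplified sketch of using an agreed schema both ways: to check that live data still matches what a deployed client version expects, and to generate test data that exercises that shape. The schema and field names are invented for illustration; in practice each client version would pin the schema version it speaks.

```javascript
// A toy "schema" for one client version: field name -> expected JS type.
const episodeSchemaV2 = {
  id: 'string',
  title: 'string',
  durationSeconds: 'number',
  available: 'boolean',
};

// Check a live payload against the schema a deployed client relies on.
function conformsTo(schema, payload) {
  return Object.entries(schema).every(
    ([field, expectedType]) => typeof payload[field] === expectedType
  );
}

// Generate placeholder test data from the same schema, so tests cover the
// agreed contract rather than whatever example data happened to be handy.
function generateTestData(schema) {
  const defaults = { string: 'test', number: 1, boolean: true };
  return Object.fromEntries(
    Object.entries(schema).map(([field, type]) => [field, defaults[type]])
  );
}

console.log(conformsTo(episodeSchemaV2, generateTestData(episodeSchemaV2))); // true
console.log(conformsTo(episodeSchemaV2, { id: 'x', title: 'y' }));           // false
```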

We also realized that a pure server-side rendered approach wouldn't provide the quality of experience the audience expected. It was my belief that the client could essentially be a browser within a browser, but it needed to be a lot more than that - there was a lot more resilience required to be reactive to backend problems. A great example of how we manage upstream issues is how we degrade our playback controls. This is, I guess, the part of the app that people spend most of their time in, and you've got various control options. You've also got suggested onward journeys - recommended content to watch peeking up at the bottom there - and you've got an Add button for adding it to your favorites.

There's also metadata. If any of those systems are having any kind of scaling problem, we can basically turn off any of those things and the app just scales back. In fact, you can get to a point where you're happily watching "Bodyguard" and all the rest of the BBC systems are down, but you've still got some very, very basic controls that are baked into the client. So there's that mix of the server-side rich functionality, the onward journeys and everything, and then just the simple journeys that we look after in the client code itself. When we rolled this out, my wife complained to me, "I can't find the fast forward button." I was thinking, "Oh, that's brilliant, because if this were last week, you'd be complaining that the video had stopped playing." So that was quite reaffirming.
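A rough sketch of that degradation pattern: the playback screen is assembled from several optional, server-provided pieces (onward journeys, favourites, metadata), and each one simply disappears if its upstream is struggling, leaving the basic controls that are baked into the client. The panel names, element IDs and endpoints are assumptions for illustration.

```javascript
const optionalPanels = [
  { name: 'onwardJourneys', url: 'https://example.invalid/render/onward' },
  { name: 'addToFavourites', url: 'https://example.invalid/render/favourites' },
  { name: 'metadata', url: 'https://example.invalid/render/metadata' },
];

async function renderPlaybackScreen() {
  // Basic controls (play/pause, seek) never depend on the network.
  renderBakedInControls();

  // Each optional panel is fetched independently; a failure just means that
  // panel is skipped rather than the whole playback screen breaking.
  await Promise.all(
    optionalPanels.map(async (panel) => {
      try {
        const response = await fetch(panel.url);
        if (!response.ok) return;
        const html = await response.text();
        document.getElementById(panel.name).innerHTML = html;
      } catch (e) {
        // Upstream having a bad day: degrade silently for this panel only.
      }
    })
  );
}

function renderBakedInControls() {
  document.getElementById('controls').textContent = 'Play / Pause';
}

renderPlaybackScreen();
```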

What’s Next?

What's next then? I think one of the most interesting opportunities for the BBC, being a broadcaster, is leveraging the world of broadcast technology, and it brings this talk full circle to what I was saying about MHEG at the beginning. There's this junction where our MHEG world of broadcast and our JavaScript world of IP join. I don't know if you've seen the green button triggers, but these are graphics that we broadcast, displayed by the MHEG app, that are timed to only appear at the right points in the programme - where you've come in and missed the beginning.

There's quite a lot of logic and complexity going on there, but there's even more to it than that, because we need to check that you've got internet access and that you can actually launch iPlayer on that device. The big thing, though, and a big change from a broadcast experience, is that suddenly in the broadcast domain you have to understand what load the clients are actually going to generate. It's like the World Cup UHD problem: even if only a small percentage of the potentially millions of people watching an episode of "Bodyguard" decides to press green to watch it from the start, we could have real capacity issues. It's very, very hard to tell how many people will follow those triggers.

Another journey we've been playing with is this one: press red to watch the rest of the episodes in the box set. Again, it's impossible to know how popular a programme is going to be, or how many people actually want to binge it. So we have to tie these things into our counting service so that we can't go over capacity. We have to be able to scale in advance of these kinds of triggers, otherwise we can have really big problems, and we can suppress the triggers if we need to.

I think these broadcast-to-IP journeys are really interesting, because they represent a union of two very different worlds. The priorities are very different, the challenges are very different, the development practices are very different, and while they both serve millions of viewers, they do so in very different ways. For example, broadcast might have millions of people watching, but effectively the load is one - the data is played out once and broadcast out. Whereas with IP, you've got millions of people directly connecting to get that personalized data experience. Availability in broadcast is typically measured at five to seven nines; in IP you're lucky, really, if you're talking three nines. Data-wise, broadcast has very much a push model - things change, you push them out, and they get broadcast to the user's box - whereas IP is very much a pull model, where you want the latest live data. Security: I mean, broadcast is amazing, everything's triply locked down and triply encrypted, whereas for IP they run whole conference tracks on security.

Then there's even the approach to risk. In broadcast there's this great mentality of "it'll never fail": everything's triply redundant, there are extra data centres everywhere and nothing can ever fall over - or if it does, there's always another enormously expensive data centre or broadcast tower to play it out. Whereas in IP, the philosophy is very much about learning from failure and failing fast. So I think there's a lot these worlds can learn from each other, which is what makes that triggering work really, really interesting. Our broadcast estate has to have operational triggers that monitor live load, and our IP estate has been challenged to think about resilience and push models - what can we borrow from broadcast that makes sense in the IP world?

I think that clash captures the real opportunity the BBC has now, as audiences make this transition from broadcast to IP. I wanted to end on this quote from our CTO: "We need these attributes of broadcasting to be carried over to the digital age and should have the ambition for them to be amplified by the creative potential of the internet." He was referring to the qualities of broadcast experiences - things like quality, breadth, universality - but I think it's equally true of broadcast technologies. As engineers, we need to learn from the platforms that have come before us.

Questions & Answers

Participant 1: What sort of lifetime are you planning for supporting a TV for?

Buckhurst: We typically support them for about eight years; that's what we aim for. The spec evolves every year and we do our best to keep devices running for as long as possible, but it does reach a point where the audience on a particular device range is not significant enough, and keeping it going can become very costly. It depends on what the stack is and what the capabilities are. I think also the reality these days is that people can buy quite inexpensive sticks and things to plug in and upgrade their TVs.

Participant 2: Was there a particular reason you decided to do server-side rendered JavaScript for the TV, instead of pushing the application to the TV - something like a hybrid package just shipped to the TV that stays there?

Buckhurst: There are a few things. One is that, as much as possible, we wanted to turn TV application development into web development, right? We could use React on the backend; we could just bring in web developers who'd feel comfortable working with that, and then they own a small chunk of the application. The client is there to glue all the parts together and keep the resiliency. So some teams only really deal with the backend services and the server-side rendering, and are more about solving the business challenges of iPlayer. Some teams are a bit more focused on the client, and then some teams have a mix of both where it makes sense.

Participant 3: Thank you for the talk. Is there a way of using emulators instead of physical TVs, or are those just for those high end [inaudible 00:43:03]

Buckhurst: We have played with emulators in the past. You don't typically get many of them these days, and they tend to be sort of the same level of experience we'd see on the TVs. We also prefer using retail models of televisions, so we know we're actually dealing with the real user experience, because quite often the debug versions don't really represent the retail versions. So yes, certainly these days we don't really use any emulators.

Participant 4: Thanks for the presentation. I noticed the "Game of Thrones" reference inside the mountains on one of the slides as well, which I thought was quite nice. You mentioned that you'd have different clients needing different schema versions. I'm just wondering how you handle that at runtime. Do you have redundant systems that have the old data schema, or is it only additive? Do you only add new ...?

Buckhurst: We have tests that can run against the client or the backend, and the schema tests basically have a version of the schema that they run against. So, I guess, if there are 10 different clients out there, we should be running 10 different versions of those tests. Years back we used to have a lot of devices we'd have to hold back, and they'd have to run older versions of the client, particularly as everything was client-side. These days it's pretty much just: has someone left iPlayer on for two days, so they're going to be on a really old version?

 


 

Recorded at:

Apr 24, 2019
