Transcript
Luu: My name is Christopher Luu. I work at Netflix on the growth engineering team. What that means is that we work on the customer lifecycle: anything that happens for Netflix before the user is a member, while the user is a member, if they subsequently cancel and become a non-member, and everything in between. This talk is about how we decided to approach server-driven UI, not just for mobile, but for all of our platforms: the problems that we were encountering that caused us to even go down this path, the challenges that we faced along the way, all that kind of stuff.
What is Server-Driven UI (SDUI)?
What is server-driven UI, SDUI? It is a UI driven by the server. We’re going to talk about the spectrum of SDUI. What exactly does it mean to be on the spectrum here of server-driven UI? All the way on one side, we got something that’s really not very server-driven, like a completely offline app might be. You could think like a calculator app or something like that. It doesn’t ever need to talk to the server unless it’s stealing your data, which they probably are. All the way to the other side, you can have something that is extremely server-driven like a WebView. Something that is driven entirely by a server, where it’s sending HTML and JavaScript and CSS, and it renders it. That’s the other side of it.
Then everything in between in the spectrum. You’ve got the RESTful API. You’ve got GraphQL APIs. You even got SOAP APIs, for some reason. I think pretty much all of us probably do some form of server-driven UI. That’s the point. Even at Netflix, before we embarked on this journey, we were doing server-driven UI. We had all sorts of different protocols. You may have heard of Moneyball, or Falcor, or some of our GraphQL experimentation. All of these are server-driven UI, because the server is telling each of the clients to drive the UI to do something, to show something. For this particular case, we wanted to get a little bit closer to all the way on the other side of that, which is right there. Maybe not right there, I just put it there. We wanted to be server-driven for this particular problem.
Pros and Cons of Server-Driven
What are some of the pros of being more server-driven UI, in my opinion? One of the big pros here is this first bullet, the updating of the UI without a client update, because the server itself is what is driving the UI. What is so key about that, especially for our mobile applications, is that we can update the UI without needing to submit it to Apple or to Google, or for our TV, they can just refresh and get the latest version. That’s pretty cool for us, especially as we’re trying to do a lot of experimentation. We’re also able to share a little bit more code and logic across all the different platforms that we’re dealing with. Netflix is on a lot of different devices. With all of these different platforms, we’re really interested in being able to share some of that logic so not every single client has to reimplement the same networking code and the same parsing logic and all that stuff.
Netflix is a very A/B heavy company, and so being able to iterate fast on different A/B tests is a great benefit for us. That’s one of the big pros that we’re looking for here. This other point is interesting to us, where developers can work on other platforms as well. What that means is that even if I have no idea what mobile development is like, I’ve never touched Xcode or Android Studio or anything like that, I may actually still be able to create a UI from the server and have it render with native UI elements. That’s a pretty cool prospect for us, especially as we’re trying to juggle all of the different developers that we have and try to figure out which projects they could be assigned to.
Of course, there are cons. What are some of these cons that we have to consider here? We’ve got the higher upfront cost to architect. We’re dealing with not just a simple RESTful API anymore, but we have a little bit more to deal with in order to, upfront, be able to architect exactly what we’re going to be able to drive from the server. There’s this complicated backwards compatibility concern. We’ve got an app that’s out in the App Store, maybe the user doesn’t update it for a while. How do we deal with that? There’s this non-native UI development.
What that means is that all of us engineers, we pick the particular platform that we’re developing for, probably because it’s delightful to us. We like SwiftUI development. We like Jetpack Compose. We like Angular and React. We like these particular ones. If we’re changing it so that now we’re having to do the development of a UI from the server, that’s not necessarily the best thing. It’s also harder to support offline apps. If, by necessity, the UI of an application needs to be driven by the server, if it can’t connect to the server, then what are we going to do? These are problems that we have to deal with. It’s also harder to debug. You’ve got more layers of abstraction in between the client, all the way up to where the UI is being driven.
What Were We Trying to Solve?
What were we as Netflix on the growth team trying to solve here? I’m going to tell you a little bit of a story. It’s a story that got us to this point. We’re going to be talking about UMA. Of course, at Netflix, and I’m sure at a lot of other companies, we have a lot of different acronyms. UMA is the Universal Messaging Alert. This is a generic UI that we were trying to drive across all of our platforms in order to notify the user of something. They may have a payment failure, and we want to be able to rectify that so that they can go on streaming our lovely service. There may be a promotional aspect where we want to drive the user to certain parts of the app. Maybe there’s a new feature that we want to exploit for them, something like that. There are all these different kinds of alerts.
Our messaging partners have created this lovely template of different kinds of alerts that we can display to the user. Here, just as a very simple alert, it’s got a title, a body, some CTAs. You might imagine a JSON payload driving that being something on the left there, with the title, body, the CTAs, and it tells it to do something. It’s really cute and cuddly. It’s not that complicated. All of our platforms can implement it pretty easily. I might call that UMA-kun. It’s nothing to be concerned about. Let’s pretend we were actually trying to iterate on this a bit. If we were trying to actually create an API for this, you might have a title, it’s a string, easy enough. We’ve got a body. It’s a formatted string because we know copywriters, they want to put some bold, or they might want to link to a privacy policy or something like that, so we got to make it formatted. Big deal. We got the CTAs.
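A payload for that simple alert template might be sketched like this. To be clear, these field names and action shapes are illustrative guesses, not the actual Netflix API:

```typescript
// Hypothetical sketch of a simple UMA alert payload. The real field names
// and action shapes are internal to Netflix and not shown in the talk.
interface Cta {
  label: string;
  action: "dismiss" | "navigate";
  url?: string; // only for navigate actions
}

interface SimpleAlert {
  title: string;
  body: string; // formatted string: may contain bold spans or links
  ctas: Cta[];
}

const payment_failure: SimpleAlert = {
  title: "Payment declined",
  body: "We couldn't process your payment. Please update your card.",
  ctas: [
    { label: "Update payment", action: "navigate", url: "/account/payment" },
    { label: "Not now", action: "dismiss" },
  ],
};
```

Every platform can parse and render something this small without much ceremony, which is why UMA-kun was easy to support at first.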
We do need to have the ability to block the user in case it’s something really legal specific. They need to accept some new terms of use, or something like that. We need the ability to say whether they can close it without using one of the CTAs. Then the design team came and said, “Actually, we want to try something like this. This is going to be really awesome”. That means, now there’s some eyebrow text. That’s the text that goes above the title. Sure, we can add that. Now it looks like they center-aligned the content instead of what it was before, which was left aligned, so now we’ve got to add some text alignment. Background color, it’s not the same color as before. There’s a header image now, this banner image.
Sometimes they might want an icon instead of an image. Maybe like a warning sign or something like that. Then there might be a footer, if it’s really legal. They need to add some copy beneath the CTAs. Maybe there needs to be another background image in case they don’t just want to use a color, they got to use some gradients or something beautiful in the background. Maybe there needs to be text color that changes, depending on the brand and what is displayed in the background, could even need some secondary CTAs. You all get the picture.
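Accumulating all of those requests, the once-simple template balloons into something like the following. Every name here is a guess at the shape this accretion takes, not the actual Netflix schema:

```typescript
// Illustrative only: how the once-simple alert template accretes optional
// fields over time. None of these names are the real Netflix API.
interface UmaSama {
  title: string;
  body: string;              // formatted: bold, inline links, etc.
  ctas: string[];
  blocking: boolean;         // can the user close it without using a CTA?
  eyebrowText?: string;      // text above the title
  textAlignment?: "left" | "center";
  backgroundColor?: string;
  headerImageUrl?: string;   // banner image
  iconUrl?: string;          // sometimes an icon instead of an image
  footerText?: string;       // legal copy beneath the CTAs
  backgroundImageUrl?: string; // gradients instead of a flat color
  textColor?: string;        // varies with brand and background
  secondaryCtas?: string[];
}

const legal_alert: UmaSama = {
  title: "New terms of use",
  body: "Please review and accept our updated terms.",
  ctas: ["Accept"],
  blocking: true,
  eyebrowText: "Important",
  textAlignment: "center",
  footerText: "Terms apply.",
};
```

Each field is individually reasonable; the problem is that every client on every platform now has to understand all of them, in every shipped version.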
UMA-kun rapidly turned into UMA-sama. It got to the point where we were trying to deal with all of these different requirements, and this once simple API shifted and became more complicated. Not only that, we also have different platforms that we have to deal with. The four main platforms currently on Netflix are TV, web, iOS, and Android. For TV, we do have a special platform that means that we don’t have to build a very specific UI for every single TV brand out there; we have our own platform that allows us to evolve those relatively quickly. For iOS and Android, specifically, we have this long tail version issue.
This means that, because of how quickly we release app versions, there are likely going to be a good number of users who don’t necessarily always update to the latest version. Not only that, iOS might drop support for a certain device: an iPad from four years ago may not necessarily run the latest version of iOS. Someone gets an iPad from their parents or something like that, and all of a sudden it just stops working with Netflix. That wouldn’t be great. We can’t drop support for those. We have to consider all of these different things.
Let’s say we go back to UMA-sama, and iOS happened to introduce the footer field in version XYZ, but TV didn’t implement it until a certain date. Web is ok. Web’s generally ephemeral: you refresh the page, you get the latest version, so maybe that’s not as big of a deal. Android, they’re slackers. They haven’t implemented it yet, whatever. The point is, the backend now has to figure out, can I even send this message that has this particular field in it? How does it deal with that? This is one of the key problems that we were trying to solve as we explored server-driven UI. It’s worth mentioning that this also exists on the TV side: our older TVs, something that was sold many years ago, may not be able to support the latest version of our platform, so we had to deal with this problem there as well.
Not only that, many of our interstitials and things are multi-step. We’ve got this first message that appears, maybe it’s bugging the user to enter their phone number, but then we also have to verify their phone number and then show them some toast that lets them know that they were able to enter it properly. We deal with a lot of microservices at Netflix, which means that likely the first screen might be driven by one server, and then the client has to do completely other integration with some very other service in order to populate the next couple screens and be able to submit them.
Possible Solutions
What are some possible solutions? We could do a WebView. That was all the way on one side of the server-driven spectrum. If you remember, the design of UMA-kun was an interstitial, something that pops up. I personally have never seen a WebView that looks very good, especially in the context of presenting an interstitial. We might have to take over the user’s screen or something like that. That’s just not great. What if we just evolved the template? Now that we know all of the properties that UMA-sama has, can we just learn from that, make that our new API, set it from day one, and then never evolve it again? Yes, no one believes that.
We, of course, looked at server-driven UI, and we came up with something that we call CLCS for short. It does not stand for Christopher Luu’s Component System. It stands for the Customer Lifecycle Component System. It’s designed to be able to drive all of these different kinds of interstitials relevant to everything in the customer lifecycle.
CLCS (Customer Lifecycle Component System)
CLCS, in general, is a wrapper around our design system. This is really important for us, because I am a lazy engineer, and I do not want to have to go and implement all of the UI components necessary to build a server-driven UI. I, as a lazy engineer, am able to utilize all of the lovely work being done on all of our platforms to adopt our new design system. This allowed us to move much more rapidly, because we already had all these kinds of buttons and components and things that we could use. It supports multi-step interstitials because it completely abstracts all of the backend logic away from the client.
Now the client deals with one little middleware layer that then reaches out to all of the other microservices and figures out: I’ve got a message. Here’s the message. It’s displayed in some template, something like that. I’m going to turn it into this server-driven UI payload and give it to the client. The client says, great, I’m going to render it. I’m going to collect some user input. Maybe it’s a form, like that phone number collecting interstitial I showed you before. Collect all that data, send it back to the server. The server says, this means that you want to render the next screen. Here’s the next screen, and on it goes.
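That render-collect-submit loop can be sketched in a few lines. This is a rough approximation of the client's role only; the names and shapes are hypothetical, and the real protocol is GraphQL-based rather than this toy interface:

```typescript
// Rough sketch of a multi-step server-driven flow: the client renders
// whatever screen the server sends, collects field values, and posts them
// back until the server signals the flow is complete. Hypothetical shapes.
type Screen = { id: string; components: unknown[] };
type FieldValues = Record<string, string | boolean | number>;

interface FlowServer {
  firstScreen(): Screen;
  // Returns the next screen, or null when the flow is complete.
  submit(screenId: string, fields: FieldValues): Screen | null;
}

function runFlow(
  server: FlowServer,
  render: (s: Screen) => FieldValues, // render UI, block for user input
): number {
  let steps = 0;
  let screen: Screen | null = server.firstScreen();
  while (screen !== null) {
    steps++;
    const values = render(screen);             // client renders + collects input
    screen = server.submit(screen.id, values); // server decides what's next
  }
  return steps;
}
```

The key property is that the client never knows which microservice drives which step; it just keeps rendering whatever comes back.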
That design system at Netflix is called Hawkins. Basically, the Hawkins consumer design system is how we try to establish a branding guideline across all of our applications and all of our platforms: a set of typography tokens; components like buttons, checkboxes, or inputs, any of the various kinds of components that you might want to use to build UIs; and colors, like surface colors, foreground colors, borders, all that stuff. Because this Hawkins design system was very much in progress when we started the server-driven UI work, we were able to leverage all of that and essentially just wrap it up in a nice little server-driven UI bundle and deliver that to the client. Then the client could basically utilize all of these 100% native components built in whatever UI frameworks are native to that platform.
On iOS, that might be SwiftUI. On Android, that might be Jetpack Compose, or it could be the legacy XML, it could be Flutter, it could be React Native. We don’t really care. All we care about is that now all of these design system components are being implemented on each platform, and we’re able to utilize them. Not only that, they already established certain levers, because one of the key pieces about going down this route was that we did not want to reinvent a browser. We did not want to just reinvent HTML, because otherwise, why not just send a WebView? It was very important to us to pick specific things about each of the components that we wanted to be able to drive from the server.
On the left side, you might see all the different kinds of buttons there. Those are the exact levers that we provide through the server-driven UI to customize the way that a button looks. We don’t let you specify a height or an arbitrary width or anything like that. We utilize the particular levers that the Hawkins design system was able to provide for us. Similarly, we don’t allow you to just change the foreground color, because that’s where it starts to eke into being more like a browser, which we just did not want to do.
CLCS as a whole is basically components, fields, and effects. The components are all of those building blocks: buttons, inputs, checkboxes, all that stuff. Then, we basically glue all those things together with stacks, essentially vertical stacks, horizontal stacks, or, because of web, more responsive stacks. Stacks that might want to be horizontal if I’ve got enough room, or might want to be vertical if I don’t have enough room. We’ve got fields, which are how we actually collect user input. A field might be a string, a Boolean, a number, depending on the kind of data that we want to collect from a user. Then the effects.
These are what actually happens when the user interacts with the particular component, so when the user taps on a button, it might dismiss the interstitial, or submit for the next screen, or log something, something like that. This is what it actually can look like. Here I was trying to fit every single possible UI platform that Netflix supports, which is the web, TV, iOS, and Android on one screen. It honestly didn’t fit. I had to try to jam it as much as possible. You see, there’s an iPhone, there’s an Android phone, there’s an Android tablet, there’s an iPad, there’s a web screen jammed in the corner there. There’s a TV emulator on the bottom right. They’re all displaying the exact same payload here, which is a complicated one. It’s got three cards here.
Each one has a banner image and the title and body. For smaller screens, they get laid out vertically. For larger screens, they get laid out in a nice little horizontal stack. This is all being driven by a single payload across all the four major platforms.
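The components, fields, and effects vocabulary described above can be sketched as a small set of tagged types. The real CLCS vocabulary is much larger, and all of these names are guesses for illustration:

```typescript
// Minimal sketch of the components / fields / effects split. The real
// CLCS vocabulary is larger; all names here are illustrative guesses.
type Component =
  | { kind: "text"; content: string; typography: string }
  | { kind: "button"; label: string; onPress: Effect }
  | { kind: "checkbox"; field: Field }
  | { kind: "verticalStack"; children: Component[] }
  | { kind: "horizontalStack"; children: Component[] }
  // Responsive: horizontal when there is room, vertical otherwise.
  | { kind: "responsiveStack"; children: Component[] };

// Fields collect typed user input.
type Field =
  | { key: string; type: "string"; value?: string }
  | { key: string; type: "boolean"; value?: boolean }
  | { key: string; type: "number"; value?: number };

// Effects describe what happens when the user interacts with a component.
type Effect =
  | { kind: "dismiss" }                 // close the interstitial
  | { kind: "submit" }                  // send fields back, fetch next screen
  | { kind: "log"; event: string };     // record an analytics event

const ok_button: Component = {
  kind: "button",
  label: "OK",
  onPress: { kind: "dismiss" },
};
```

Keeping the effect set closed like this is part of how CLCS avoids turning into a browser: the server can only pull levers the clients already know about.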
Let’s take a little bit of a closer look at a CLCS screen. Here we’ve got one of the more typical UMA-kuns. We’ve got an image at the top, an icon in this particular case. We’ve got a title. We’ve got a body. We’ve got some CTAs. We’ve got the footer. Essentially, we were able to break that down as one modal that contains a vertical stack. You can see that it’s pretty vertically oriented there. The first element of that vertical stack might be a horizontal stack that includes a single child that is center aligned. The next one is just the title text. The next one is just the body text. Those are text elements that have specific typography tokens that tell them to render with a specific font and weight. Then we’ve got the CTAs. The CTAs are actually going to be in one of those responsive stacks that I mentioned, which means that on a larger screen, those might want to be displayed next to each other, whereas on a smaller screen, vertically.
Finally, the footer text at the bottom. What could that actually look like in the CLCS code that we generate? It could look like this. It does look like this. Basically, because a lot of our engineers at Netflix, and especially on the growth team, are web engineers or TV engineers, they’re very familiar with React programming. We utilize JSX and TSX to essentially allow us to author the UI like this from the server. You can see, it’s all of those elements that I mentioned before. It’s got that modal. It’s got that vertical stack, the horizontal stack with the content justification as center. Because of this, we’re able to actually generate this UI. It turns into the payload that each of the clients expects, and then those clients are off to the races, and they’re able to render it as you saw on the other screen before.
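JSX like that ultimately compiles down to nested element-constructor calls. A rough plain-TypeScript approximation of the screen just described might look like this; all component and prop names are illustrative guesses, not the actual CLCS API:

```typescript
// Plain-TypeScript approximation of what server-side JSX for this screen
// compiles down to: nested element-constructor calls building a tree.
// Component and prop names are guesses, not the real CLCS API.
type El = { type: string; props: Record<string, unknown>; children: El[] };

const el = (type: string, props: Record<string, unknown>, ...children: El[]): El =>
  ({ type, props, children });

const screen = el("Modal", {},
  el("VerticalStack", {},
    // Horizontal stack with a single, center-aligned child (the icon).
    el("HorizontalStack", { contentJustification: "center" },
      el("Icon", { name: "warning" })),
    // Title and body are text elements with typography tokens.
    el("Text", { typography: "headline", content: "Update your plan" }),
    el("Text", { typography: "body", content: "Your plan is changing soon." }),
    // CTAs live in a responsive stack: side by side when there is room.
    el("ResponsiveStack", {},
      el("Button", { label: "Learn more", buttonType: "primary" }),
      el("Button", { label: "Dismiss", buttonType: "secondary" })),
    // Legal footer beneath the CTAs.
    el("Text", { typography: "footer", content: "Terms apply." })));
```

The server serializes a tree like this into the payload each client expects, and the client maps each node onto its native Hawkins component.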
Backwards Compatibility
Mr. Netflix man, you were talking about backwards compatibility. This is a key problem that we have to deal with, because there are all those folks who might not have updated to the latest version. What if you create some new components that cannot be rendered in the old version that someone might be running? What do we do there? How do you make sure that you’re not sending something that would just completely make the app crash, or something like that? We actually rely a lot on GraphQL. This is part of our transition to GraphQL at Netflix. We do a lot with the built-in safety features of GraphQL that allow us to ensure that there are no breaking schema changes. It also allows us to fall back on components using what we call request introspection.
Then we can also basically know that there is a set of baseline components that are implemented on all the different platforms, at a minimum, if they support the CLCS spec. What do I mean by GraphQL features? The way that GraphQL works is that when you are trying to query for something, you might create a fragment. Here’s an example of a fragment on a button component. With GraphQL, you have to specify every single property that you want to get, and maybe expand on that with another fragment, something like that. Here we might be getting the accessibility identifier, the label, the button size, type, icon, onPress. Because of the way that GraphQL works, if we were to remove the label property altogether, all of our deployment scripts, our CI/CD, everything, would yell at us and say, “You can’t do that. That’s a breaking schema change”.
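A button fragment along the lines described might look like this, embedded here as a GraphQL document string. The field names follow the talk, but the exact schema, and the nested `IconFragment` and `EffectFragment` names, are assumptions:

```typescript
// Roughly what a button fragment like the one described might look like.
// Field names follow the talk; the actual schema is internal to Netflix,
// and the nested fragment names here are assumptions.
const buttonFragment = /* GraphQL */ `
  fragment ButtonFragment on Button {
    accessibilityIdentifier
    label
    buttonSize
    buttonType
    icon {
      ...IconFragment
    }
    onPress {
      ...EffectFragment
    }
  }
`;
```

Because every client query must spell out each field it consumes, removing `label` from the schema would invalidate every deployed query that spreads this fragment, which is exactly what the schema-change checks catch in CI/CD.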
We also have a lot of observability so that we can see exactly how often a particular property is being utilized. Maybe in the future, if we do decide to deprecate something, we could take a look at that and safely remove it. It makes it very hard to remove something by accident; you have to force it. That’s one of the nice things. We don’t have to worry about accidentally removing something that hundreds of clients are still using.
The component fallback thing is an interesting one, though. Let’s pretend we have this overarching fragment called ComponentFragment, which spreads over the entire interface for a CLCS component. Every component that exists belongs in this one huge switch statement. I don’t show it here, but you might assume that there’s the button fragment referenced here, or the text fragment, or the checkbox fragment, all that stuff. Let’s pretend there’s this fancy new label that we want to introduce for some reason.
The FancyLabel is now added to this ComponentFragment, and there’s a fragment for it. That’s all well and good. When we actually define how this particular component is rendered or put together, we define a component in our CLCS backend that is essentially able to take in a fallback. That fallback takes all the properties that belong to the FancyLabel, let’s pretend they’re key, label, color, typography, all that stuff, and it’s able to say: what are you going to do if this particular component just does not exist for this client? The client did not request it. It’s not spreading on it. It has no reference to it. Because of the way that GraphQL works, every request has to basically reference all of the things that a particular client supports, so that is our tell. It says, “They didn’t request the FancyLabel component, so I need to fall back in this case”.
In this case, we could fall back to a less fancy label, just give them a text, great, whatever. They’re using an old version, maybe they don’t need to see the latest stuff. We could also potentially do other things. We could send them a button that says, “Sorry, you got to update to the latest version. Here’s a way to get to the app store”, something like that. It’s completely arbitrary, made up, but there are options here. This is a way that we can actually fall back, which is key, because we do want to continue to evolve CLCS. We don’t want to just stay at the set of baseline components.
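The request-introspection fallback just described can be sketched like this. The server checks which component types the client's query actually spread over, and serves something older if the new one is missing; all names and shapes here are hypothetical:

```typescript
// Sketch of request-introspection fallback: the server inspects which
// component fragments the client's query spread over, and if FancyLabel
// was never requested, serves a plain Text instead. Hypothetical shapes.
type Component =
  | { kind: "FancyLabel"; key: string; label: string; color: string; typography: string }
  | { kind: "Text"; key: string; content: string }
  | { kind: "Button"; key: string; label: string };

function fancyLabel(
  requestedKinds: Set<string>, // component types spread in the client's query
  props: { key: string; label: string; color: string; typography: string },
): Component {
  if (requestedKinds.has("FancyLabel")) {
    return { kind: "FancyLabel", ...props };
  }
  // Old client: fall back to a component it definitely understands.
  return { kind: "Text", key: props.key, content: props.label };
}
```

The same hook could instead return a "please update" button, as mentioned in the talk; the point is that the fallback decision lives in one place on the backend, per component.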
What does UI development actually look like in CLCS? We’re going to explore something that I call templating and demoing. Essentially, you might assume that we have this function, the MyLovelyScreen template. It takes in these options, let’s say a title and a date, and it’s able to render them. This is a very pared-down version of what this payload might look like. You might assume it’s got this modal with a stack of these two text elements, or something like that. This is our template function. When I actually want to render this, let’s say there’s a backend that I want to integrate with, you might have the MyLovelyScreen function, which promises to return a screen. In that case, it reaches out to the backend and fetches the data it needs.
Then it calls that MyLovelyScreen template, so that it can actually render it in the way that it’s supposed to. At the same time, we can also create a completely mocked version of this, like a demo. In this case, it’s passing in a title and a date that are completely arbitrary, made up. It’s basically utilizing the exact same rendering path, and it’s going to be rendering that same screen. Because it’s utilizing the exact same thing, we can actually use these demos for a lot of things. We can use them for our automated testing strategy. Because these demos render in the exact same way that the actual backend-driven version does, we’re able to run all sorts of different kinds of screenshot tests against them. The client is now essentially forced into being just a rendering engine and a user input collection engine.
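The templating-and-demoing pattern described above can be sketched as one pure template function with two entry points: a real one that fetches from a backend, and a demo one that feeds mocked data through the exact same rendering path. Names follow the talk; the shapes are guesses:

```typescript
// Pared-down sketch of templating and demoing: one pure template, a real
// entry point that fetches backend data, and a demo entry point that uses
// mocked data through the identical rendering path. Shapes are guesses.
type Screen = { components: { kind: string; content: string }[] };

// The pure template: options in, screen out. No I/O, no backend knowledge.
function myLovelyScreenTemplate(options: { title: string; date: string }): Screen {
  return {
    components: [
      { kind: "text", content: options.title },
      { kind: "text", content: `Starting on ${options.date}` },
    ],
  };
}

// Real path: fetch from a (hypothetical) backend, then render the template.
async function myLovelyScreen(
  fetchData: () => Promise<{ title: string; date: string }>,
): Promise<Screen> {
  return myLovelyScreenTemplate(await fetchData());
}

// Demo path: arbitrary mocked data, identical rendering path. Screenshot
// and snapshot tests can run against this without any backend setup.
function myLovelyScreenDemo(): Screen {
  return myLovelyScreenTemplate({ title: "Demo title", date: "2024-01-01" });
}
```

Because both paths funnel through `myLovelyScreenTemplate`, a screenshot of the demo is a faithful proxy for the real backend-driven screen.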
Now we can set up tests where the client gets this demo payload, which, for all intents and purposes, looks exactly like the real payload. We’re able to take screenshot tests there and ensure that there are no regressions introduced if something changes in the client renderer, or something changes in the backend, or something like that. That’s pretty cool, because in the past, we have been pretty reliant on end-to-end tests at Netflix, which means that we have to set up a user. We have to create this fake user. We have to set it up in this particular state to get this particular message, get to this particular screen to actually render this particular screen. It was a pain. It’s faulty. There are so many points of failure.
Now, because we’re able to just go and say, “Backend, give me a demo of this particular screen”, we’re able to bypass all that, exercise just the logic that we want, and show the particular payload. Not only that, it’s super helpful for localization quality control. We support a lot of different languages in our app, and in order to support our localization teams, we take screenshots of everything and ask, does this screen look ok in your language?
In the past, with those end-to-end tests, it was a massive pain to try to get the user into all these different states and ensure that they work, whereas now all it has to do is make a single call, get a demo, come back, and render it with that particular language. We’re able to do these client integration tests, so we can build up that much more confidence in how CLCS is implemented. These are tests that exercise how the effects are implemented, or that are hyper-focused on each particular effect, so that we can ensure that when we ask it to do something, it’s doing the right thing. We’re also, from the backend side, able to take a lot of snapshot tests with our templates.
I mentioned the template before with MyLovelyScreen. We’re now able to take actual snapshots of the output, and when that changes, we know: “You updated this template to add some new field or something like that”. Since we have a snapshot, it says, this changed, is that right? Is that what we’re expecting? That allows us to have much more granular confidence in the actual templates that we’ve created. What about end-to-end tests? Those are still important for us, because at the end of the day, it’s great that the client does what it’s expected to do, but what if something in the backend actually breaks?
For this, we actually created a pretty cool system that allows us to create a completely headless client that implements CLCS. What that means is that it can take a CLCS payload and interpret it, traverse its DOM to determine if elements exist when they’re supposed to, and even click on buttons for us. The really nice thing about this is that we have one centralized place for all of our platforms. In the past, every single platform, Android, iOS, TV, web, had to create their own end-to-end tests and set up all of this stuff to get to the right state. Now it’s all happening in one place, in the backend. If something happens downstream from us, we can point at the right place, and they’ll hopefully fix it for us.
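The core of a headless client like that is just tree traversal over the payload, finding elements and invoking their press effects with no UI attached. A minimal sketch, with every name and shape hypothetical:

```typescript
// Minimal sketch of a headless CLCS-style client: walk the payload tree,
// assert elements exist, and "click" buttons by invoking their effects.
// Every name and shape here is hypothetical.
type UiNode = {
  kind: string;
  key?: string;
  onPress?: () => void;
  children?: UiNode[];
};

// Depth-first search for a node by key, like querying a DOM.
function find(root: UiNode, key: string): UiNode | undefined {
  if (root.key === key) return root;
  for (const child of root.children ?? []) {
    const hit = find(child, key);
    if (hit) return hit;
  }
  return undefined;
}

// Assert an element exists and is clickable, then trigger its press effect.
function click(root: UiNode, key: string): void {
  const node = find(root, key);
  if (!node || !node.onPress) throw new Error(`no clickable element: ${key}`);
  node.onPress();
}
```

An end-to-end test then becomes: fetch a real payload, `find` the elements that should be there, `click` through the flow, and assert on what the backend sends next, all without booting a single device.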
What’s Next for CLCS?
We’re getting to the point now where we can talk about what’s actually next for CLCS. We’re going to continue to migrate our old messages. There are a lot of those old UMA-kuns out there that we’re looking to transition to the new interstitial system. We’re going to experiment with usage across other multi-step forms that are particularly pertinent to the customer lifecycle stuff that I mentioned before. We’re going to try to replace some of our old WebView-based flows. Not only are WebViews less elegant for our users, but they’re complicated for us to maintain. That means a whole other canvas that web engineers have to go and test on.
If we’re able to eliminate those and just replace them with CLCS flows, that’d be pretty cool. Are we going to take over the entire app? No, that’s not our purpose. There are a lot of other parts of our app that would not necessarily work well with CLCS. We’ve got our huge screen where you’re scrolling through all the different movies and TV shows that you want. There’s no way we’re going to want to render that in CLCS. That doesn’t necessarily make sense. The goal is to be hyper-specific about this particular flavor of server-driven UI. What is it really good at? Use it for that. Don’t try to leak it into everywhere just because server-driven UI is cool.
If I Could Turn Back Time
If I were able to turn back time and go back to where we were when CLCS was first created, I’d probably try to do a few of these things. This might be helpful for you if you’re trying to implement your own server-driven UI. I’d probably try to establish that baseline a lot earlier. It’s hard. It’s kind of a chicken-and-egg problem, because once you start building the experiences, that’s when you realize, we actually need this other component. That’s going to push back the baseline further as you continue to explore. If we could really sit down and try to establish what those things really are, I think that would help us a lot more in the long run, and give us a lot more runway.
Also, try to formalize that testing strategy earlier. We’re at a pretty good place with our testing strategy, but it took us a while to get there. I’d probably try to work with some of our testing partners and try to figure out, how can we actually speed up this formalization so that we just build up that much more confidence in this brand-new system. Also, try to have better alignment with our design system partners. When this started, we were parallel tracked with the design system. We were basically just trying to exploit all of the wonderful work that they were doing. Now that we are much more aligned, that means that we can have much closer roadmaps together and try to evolve together, rather than each independently working separately.
It also would have been helpful to align all of the platforms on the templates earlier. I showed you that big screen that showed all of the different platforms displaying the same UI with the same payload. It took us a while to get to that point. Before, we had a lot of if statements. We had, “If I’m on TV, then do this one thing. If I’m on mobile, do this other thing. If I’m on a tablet, do this other thing”. Being able to align the platforms and the templates means that we’re able to evolve those templates much more quickly and much more confidently.
Should I Adopt SDUI?
Should I adopt SDUI? Should any of you adopt SDUI? SDUI is super personal. It’s super specific to what you’re trying to solve. On that big spectrum of SDUI, there might be certain elements that you want to garner from it. You might not need all of the flexibility that we have on the Netflix side, so maybe you’ll go closer to the other side. You probably are already using some form of SDUI, so maybe the answer is yes.
Questions and Answers
Participant 1: You have design components for SDUI. Do you have a separate set of design components for your native drawing, for the other non-SDUI-driven UI, or do you reuse the same design components?
Luu: Do we use a different set of design components for SDUI versus just the regular native development?
No, we utilize the exact same components across both. The whole point of the design system was to ensure that all of our UIs adopt the Netflix branding guidelines. It would be pretty weird if you suddenly got a popup with a completely different design element, with different typography, different colors, different components altogether. We utilize the exact same components that they have.
Participant 2: From the app's perspective, CLCS makes the app more complicated in one sense, because there are two different rendering mechanisms: there's the native stuff, and then there's the CLCS stuff on top. From the app's perspective, is it worth it? Was dealing with these popups and flows such a pain that developers would rather render this than build five different screens in a row with all the different permutations in native code? Or do the apps not really have a say, because you say, you use CLCS, and that's it?
Luu: Is it a pain? No. We have found that it actually worked quite well for us, because a lot of the kinds of flows that we’re driving through the CLCS flows, are some of the more annoying kinds of UIs you want to develop. All of the developers that were traditionally developing those in the past were more than happy to hand that over to this new system. It’s completely outside of their interest. They want to do something that has really fun animations, that shows this thing over here and all that stuff. The developers that develop the CLCS frameworks, for them, it’s interesting because they’re building this whole new system inside of the app code base. That piece is interesting. I think everyone’s pretty happy, actually.
Participant 2: The versions of the application that were finished before CLCS was implemented still do the old alerts. Only the new versions get to use CLCS. So you have to support both: the old native alerts, plus CLCS on top for the new versions.
Luu: That is 100% true. We still have a lot of versions of the applications for all of our platforms, except for web, that have no semblance of any idea of what CLCS is. We still have to support those in some fashion or another. In this particular case, we showed you UMA. UMA does still exist. That is a kind of payload that is still being sent by our backends as needed. It's not being used for new messages. We have other fallbacks. There are other ways to send simpler alerts. Not necessarily something that looks as nice as a UMA, but still something functional, maybe like a JavaScript alert, or a UI alert dialog, or something like that in iOS. We still have the ability to fall back on those older versions if we need to. The true story is that there's still going to be that subset that's always going to be a problem we have to deal with, until we can somehow completely deprecate those old versions.
Participant 3: How do you manage the navigation stack with these kinds of systems, because if I want to go back or forward or navigate through the application, how do you do that?
Luu: How do you deal with navigation stacks with this system?
We cheated because we already have a system in place for growth that is essentially a very big state machine for a user. Our UI development in the past has depended on this essential service that tells us the state of a user. If I'm in this particular state, show this kind of screen. We're utilizing that already. Essentially, if I'm on screen five, and the user hits back, we ask the state machine, what state does this user go to? The user asked to go back, and it tells us to go to, say, state three. That, at its core, is how we deal with it. We also do have a semblance of a navigation stack in the CLCS backend itself, so we're able to store the state of a particular user and know where they are in a particular flow. Even if that state machine wasn't part of the process, we're still able to figure out some semblance of where they should go. It is complicated.
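The back-navigation idea above can be sketched as a small transition table. This is a hypothetical illustration; the state names, screen names, and lookup function are made up, not Netflix's actual state machine.

```typescript
// Hypothetical sketch: back-navigation driven by a server-side state machine.
// Each state knows which CLCS screen to render and which state "back" leads to.
type StateId = string;

interface FlowState {
  screen: string;        // which screen to render in this state
  back: StateId | null;  // state to return to when the user navigates back
}

// Illustrative flow: plan selection -> payment entry -> confirmation.
const flow: Record<StateId, FlowState> = {
  planSelect:   { screen: "PlanSelectScreen",   back: null },
  paymentEntry: { screen: "PaymentEntryScreen", back: "planSelect" },
  confirm:      { screen: "ConfirmScreen",      back: "paymentEntry" },
};

// Ask the state machine where "back" leads from the current state.
function onBack(current: StateId): StateId | null {
  return flow[current]?.back ?? null;
}
```

The client never decides on its own where "back" goes; it just reports the event and renders whatever state the machine returns.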
Participant 3: How do you handle offline requirements?
Luu: How do you handle offline? We’re lucky in that most of these particular screens are happening in a world where the user has to be online. We’re basically asking our messaging service, do you have a message to display to this user? In this particular case, the user has to be online to get that message. If not, it’s not the end of the world. They can continue to operate offline. However, there are use cases that we’re exploring where we’re trying to figure that out, like, is there a way that we can deal with an offline connectivity state?
The true answer is that, yes, absolutely, because at the end of the day, the payload that we’re getting back to render as CLCS as a screen is just a GraphQL output, which happens to be a JSON payload. There’s no reason that we couldn’t necessarily bundle that JSON payload with the application in order to render in the fallback case where the user doesn’t have connectivity. It’s not the best case, because we’re not able to update that payload, but it still allows the user to be unblocked if it’s a particularly critical flow.
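The offline fallback described here can be sketched as follows. The payload shape, function names, and the idea of a single bundled screen are all assumptions for illustration, not the actual Netflix implementation.

```typescript
// Illustrative sketch: fall back to a JSON payload bundled with the app
// when the device has no connectivity (or the fetch fails).
interface ClcsPayload {
  screen: string;
  components: unknown[];
}

// A payload shipped inside the app binary for a critical flow.
// It may be stale, but it unblocks the user.
const bundledFallback: ClcsPayload = {
  screen: "CriticalFlowScreen",
  components: [],
};

function loadScreen(
  fetchScreen: (() => ClcsPayload) | null,
  online: boolean
): ClcsPayload {
  if (!online || !fetchScreen) return bundledFallback;
  try {
    return fetchScreen(); // normal path: render the server's GraphQL output
  } catch {
    return bundledFallback; // network error: render the bundled payload
  }
}
```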
Participant 4: How do you do caching?
Luu: How do we do caching? We actually rely mostly on the different frameworks that we're using. On our mobile clients, we're heavily invested in using Apollo clients. The Apollo mechanism has caching as part of it. A lot of our network engineers on the client side have already built our own caching mechanisms on top of them, so we're able to utilize that caching. There are all sorts of other caches too. There's a cache at the DGS level, or at the backend level, which means we don't necessarily always have to go back to messaging if the message was requested X amount of time ago. In general, we basically utilize most of the existing caches that we already have implemented, because we already had networking implementations on all of the different platforms.
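The backend-level cache described here ("don't go back to messaging if the message was requested X amount of time ago") is essentially a time-to-live cache. Here is a minimal sketch of that idea; the class and its API are illustrative stand-ins, not the Apollo or DGS cache itself.

```typescript
// Minimal TTL cache sketch: serve a recently fetched value until it expires,
// then fall through so the caller refetches from the source of truth.
class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();

  // `now` is injectable so the expiry logic can be tested deterministically.
  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (this.now() > entry.expires) {
      this.store.delete(key); // stale: force a refetch
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expires: this.now() + this.ttlMs });
  }
}
```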
Participant 5: Have you ever encountered the issue of some interface being so complicated that a platform can't render it? For example, you can have it on web, but you can't have it on TV, so you go to the fallback, but then another fallback is required because some version on, say, Android doesn't have that either, and you want a message telling the user they need to update. So you have multiple fallbacks. How did you deal with this situation?
Luu: How do you deal with recursive fallbacks?
In general, the fallback logic that we have in place is already able to do that. If you fall back to another component that’s not necessarily baseline, it’s already going to be able to fall back on its fallback and continue recursively. That’s not particularly a problem. I do want to touch on one piece that you mentioned, though, which is, what do you do about some component that is actually more difficult to render in general, for a particular platform. There’s an interesting strategy that we’ve started to adopt now where our components don’t necessarily need to be specific Lego blocks that are Hawkins components in our design system.
They can actually be larger components that encapsulate more interesting behavior. Something that adopts animations, or maybe a much more complicated responsive thing that we need to render specifically for web. We wholeheartedly support those, with the caveat that the further we get away from these building blocks, the more complicated it becomes to maintain those particular components and to adopt them across all the different platforms.
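The recursive fallback resolution described above can be sketched as a walk down a component's fallback chain until the client finds something it can render. The component names and shape are illustrative assumptions.

```typescript
// Sketch: each server-sent component may carry a fallback, which may itself
// carry a fallback, and so on down to a baseline component.
interface Component {
  name: string;
  fallback?: Component;
}

// Walk the chain recursively; return the first component this client
// knows how to render, or null if even the baseline is unsupported.
function resolve(
  component: Component,
  canRender: (name: string) => boolean
): Component | null {
  if (canRender(component.name)) return component;
  return component.fallback ? resolve(component.fallback, canRender) : null;
}
```

An older client that only knows `plainText` would skip `animatedCarousel` and `simpleList` and land on the baseline automatically; no special-casing is needed for the multi-level case.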
Participant 6: Can you describe, like you're explaining to a 5-year-old, how the client asks the server for the building blocks it wants to display? For example, does it send a request to the server saying, this is the platform name, for example, iPhone 15 Pro Max, and this is the dialog I want to display? Then the server returns what will be displayed?
Luu: How do we start this whole process, and what kind of data are we able to pass back to the server so that it knows exactly what to tell it to render?
We cheated. We utilize all of the existing stuff that was already in place for our networking backend. That includes a bunch of headers that we send with every single request, which inform things like which client I'm running on, iOS or Android, what version of that client I'm running, my screen size, things like that. We already have a lot of these headers, and they inform what we do on the backend side.
Because these are GraphQL endpoints, they’re just queries and mutations that we’re utilizing, we’re also able to add additional properties at any point. If there’s something specific to one particular entry point that we want to drive, some specific thing about what the user is doing at that time, maybe if they have a lower connection because they’re on cellular or something like that, we’re able to add that data as well anytime we want, for any particular entry point. We’re just able to leverage a lot of the flexibility that being GraphQL allows us.
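The shape of such a request might look like the sketch below: standard client-context headers on every call, plus entry-point-specific GraphQL variables added as needed. Every header name, field name, and variable here is a made-up illustration, not Netflix's actual schema.

```typescript
// Hypothetical GraphQL request: client context travels in headers,
// entry-point-specific data travels as query variables.
const query = `
  query ClcsScreen($entryPoint: String!, $connection: String) {
    clcsScreen(entryPoint: $entryPoint, connection: $connection) {
      components
    }
  }
`;

function buildRequest(entryPoint: string, connection: string) {
  return {
    headers: {
      "x-client-platform": "ios",   // illustrative header names
      "x-client-version": "15.2.0",
      "x-screen-size": "390x844",
    },
    body: JSON.stringify({ query, variables: { entryPoint, connection } }),
  };
}
```

Because the variables live in the GraphQL schema rather than in a fixed REST contract, a new property (like connection quality) can be added to one entry point without touching the others.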
Participant 7: If the device is in portrait and landscape, does the server send back and say the dialog box should be wider then narrower, or does the device automatically know that, and does that layout work?
Luu: Our goal was to not have to go back to the server for that, because that's a whole other round trip. As part of our payload, we added that responsive stack. Basically, it has children, and it does something with those children based on the amount of space available to it. In that particular case, the responsive stack might say, because I'm in landscape mode, I'm going to lay this out differently. It's actually a very complicated component, but that's how we leverage it, and we're able to use it for web as well. Because on web, if you're resizing the window, it's going to really suck if you have to do another request every single time the breakpoint changes. That's how we decided to leverage it on our end.
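The core of that responsive-stack behavior can be reduced to a purely client-side decision, with no server round trip. This is a deliberately minimal sketch; the real component handles far more layouts, and the breakpoint value here is an arbitrary assumption.

```typescript
// Sketch: a responsive stack picks its layout axis from the space
// available to it, entirely on the client.
type Axis = "horizontal" | "vertical";

function responsiveStackAxis(availableWidth: number, breakpoint = 600): Axis {
  // Wide enough (e.g. landscape, desktop window): lay children out in a row.
  // Narrow (e.g. portrait phone): stack them in a column.
  return availableWidth >= breakpoint ? "horizontal" : "vertical";
}
```

On a window resize or an orientation change, the component simply re-evaluates against the new width; the payload it received from the server never changes.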