
Flipkart Lite — The how?

To know what Flipkart Lite is, read our previous article on the story behind building it — Progressive Web App: A New Way to Experience Mobile

The tech behind the application

Well, where do I even start? The following is a list of most of the tech behind Flipkart Lite, in NO particular order. Many thanks to all the authors and contributors of these tools, libraries, frameworks, specifications, etc…

Front-end:

Tools — Build and Serve:

Web platform:

And a few more that I’m probably missing.

Some of the things listed above come with the tag “Experimental Technology”. But, trust me, take a leap of faith, play around, implement it in your app and push it to production. You’ll definitely feel proud.

The Monolith

The architecture decision started with choosing between “One big monolith” and “Split the app into multiple projects”. In our previous projects we had one big monolithic repository, and the experience of getting one’s code to master wasn’t pleasant for many of the developers. The problem with having all developers contribute to one single repository is enabling them to push their code to production quickly: new features flow in every day, and deployments kept slowing down. Every team wants to get its feature into master, and effectively into production, ASAP. The quintessential requirement of every single person pushing code to a branch is that it gets reviewed and merged within the next hour, pushed to a pre-production-like environment within the next 5 minutes, and ultimately to production on the same day. But how can you build such a complicated app from scratch, right from day 1?

We started with a simple system and added complexity

Separate concerns

The goal was to have the following DX (Developer Experience) properties:

  • To develop a feature for a part of the application, a developer needs to check out only the code for that particular app
  • One can develop and test one’s changes locally without any dependencies on other apps
  • Once the changes are merged to master, they can be deployed to production without depending on or waiting for other applications

So we planned to separate our app into different projects, with each app representing a product flow — e.g. Home-Browse-Product (the pre-checkout app), the Checkout app, the Accounts app, etc… — without sacrificing the common things they can share with each other. This might feel like restricting technology choices, but everyone agreed to use React, Phrontend and webpack :). So it became easy to add simple scripts to each project and automate the build and deploy cycles.

Out of these different apps, one of them is the primary one — the Home-Browse-Product app. Since the user enters Flipkart Lite through one of the pages in this app, let’s call it the “main” app. I’ll be talking only about the main app in this article, as the other apps use a subset of the same tooling and configuration as the “main” app.

Home-Browse-Product

The “main” app takes care of 5 different entities. Since we use webpack, each of the entities just became a webpack configuration.

  • vendors.config.js : react, react-router, phrontend, etc…
  • client.config.js : client.js → routes.js → root component
  • server.config.js : server.js → routes.js → root component
  • hbs.config.js : hbs-loader → hbs bundle → <shell>.hbs
  • sw.config.js : sw.js + build props from client and vendor

vendors.config.js

This creates a bundle of React, react-router, phrontend and a few other utilities.

{
  entry: {
    // each vendor library gets its own named entry
    react: "react",
    "react-dom": "react-dom",
    "react-router": "react-router",
    phrontend: "phrontend",
    ...
  },
  output: {
    // expose each entry as a global (e.g. this["react"]) so that
    // the app bundles can consume them as externals
    library: "[name]",
    libraryTarget: "this",
    ...
  },
  plugins: [
    // move the webpack runtime into the "react" chunk
    new webpack.optimize.CommonsChunkPlugin({ name: "react" }),
    // stitch all chunks into a single file with the runtime chunk first
    new CombineChunksPlugin({ prelude: "react" })
  ]
}

This is roughly the configuration for the vendors js. We tried different approaches and this worked for us. So the concept is to

  • specify different entry points
  • use CommonsChunkPlugin to move the runtime to this chunk
  • use CombineChunksPlugin (I’ve uploaded the source here) to combine all the chunks to one single file with the runtime chunk at the top

As a result, we get one single file — vendors.<hash>.js, which we then sync to our CDN servers.

The vendors list is passed down the build pipeline to the other apps as well, and each of them (including main) externalises the vendors so that they are consumed from vendors.bundle.js:

{
  externals: vendors
}
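In rough terms, the wiring looks like this (a small sketch; the way the vendors list actually flows through our pipeline is more involved, and the file name vendors.js is illustrative):

// vendors.js — the single list shared by every app's build
module.exports = ["react", "react-dom", "react-router", "phrontend"];

// in each app's webpack config
var vendors = require("./vendors");
module.exports = {
  // require("react") etc. are left as references to the globals that
  // vendors.<hash>.js exposes (library: "[name]", libraryTarget: "this")
  externals: vendors
};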

client.config.js

As the name says, this is the configuration that builds the client app. Here we do minification and extract the CSS into a single file.
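A trimmed-down sketch of what that config might look like, assuming webpack 1-style loaders, extract-text-webpack-plugin for the CSS and UglifyJS for minification; the entry point and file names are illustrative:

// client.config.js (sketch)
var webpack = require("webpack");
var ExtractTextPlugin = require("extract-text-webpack-plugin");
var vendors = require("./vendors");

module.exports = {
  entry: "./src/client.js",
  output: {
    path: __dirname + "/dist",
    filename: "client.[hash].js"
  },
  // anything in the vendors list is loaded from vendors.<hash>.js instead
  externals: vendors,
  module: {
    loaders: [
      { test: /\.css$/, loader: ExtractTextPlugin.extract("style-loader", "css-loader") }
    ]
  },
  plugins: [
    // all imported CSS ends up in one extracted file
    new ExtractTextPlugin("client.[contenthash].css"),
    new webpack.optimize.UglifyJsPlugin({ compress: { warnings: false } })
  ]
};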

server.config.js

Why do we even need a server.config.js? Why do we need to bundle something for the server?

We use webpack heavily and rely on some conventions — we add custom paths to module resolution, import CSS files and use ES6 code, and node cannot understand most of this natively. At the same time, we don’t want to bundle every single dependency into one file and run that with node. Webpack provides a way to externalise those dependencies, leaving the requires for those externals untouched, and this is one way of doing it:

// externalise everything inside node_modules so it doesn't get bundled into the server build
var fs = require('fs');

var nodeModules = {};
fs.readdirSync('node_modules').filter(function(m) {
  return ['.bin'].indexOf(m) === -1;
}).forEach(function(m) {
  nodeModules[m] = 'commonjs2 ' + m;
});

and in the webpack configuration,

{
  output: {
    libraryTarget: "commonjs2",
    ...
  },
  externals: nodeModules,
  target: "node"
}

sw.config.js

The sw.config.js bundles sw-toolbox, our service-worker code, and the build versions from vendors.config.js and client.config.js. When a new app version is released, since we use [hash]es in the file names, the URL to each resource changes, and that shows up as a new sw.bundle.js. Because there is now a byte diff in sw.bundle.js, the new service worker kicks in on the client and updates the app, which takes effect from the next refresh.
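A simplified sketch of what this config could look like. The use of DefinePlugin here is an assumption for illustration; the point is only that the bundle embeds the current build hashes, so every release produces a different sw.bundle.js:

// sw.config.js (sketch)
var webpack = require("webpack");

// produced by the vendors and client build steps (illustrative)
var vendorsHash = process.env.VENDORS_HASH;
var clientHash = process.env.CLIENT_HASH;

module.exports = {
  // our service-worker code, which pulls in sw-toolbox
  entry: "./src/sw.js",
  output: {
    path: __dirname + "/dist",
    filename: "sw.bundle.js"
  },
  plugins: [
    // baking the hashed bundle URLs into sw.bundle.js guarantees a byte diff
    // on every release, which is what triggers the service-worker update
    new webpack.DefinePlugin({
      VENDORS_BUNDLE: JSON.stringify("vendors." + vendorsHash + ".js"),
      CLIENT_BUNDLE: JSON.stringify("client." + clientHash + ".js")
    })
  ]
};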

hbs.config.js

We use two levels of templating before sending markup to the user. The first one is rendered at build time (hbs → hbs). The second one is rendered at runtime (hbs → html). Before proceeding further, I’d like to talk about HTML Page Shells and how we built them with react and react-router.

HTML Page Shells

Detailed notes on what HTML Page Shells (or Application Shells) are can be found here — https://developers.google.com/web/updates/2015/11/app-shell. This is what one of our page shells looks like —

The left image shows the “pre-data” state (the shell), and the right one the “post-data” state (the shell + content). One of the main things this helps us achieve is this —

          Perceived > Actual

It’s been understood for quite some time that what the user perceives matters most in UX. A splash screen, for example, tells the user that something is loading and gives some sense of progress; a blank screen is of no use to the user at all.

So I want to generate some app shells. I have a React application and I’m using react-router. How do I get the shells?

Gotchas? Maybe.

We did try some stuff and I’m going to share what worked for us.

componentDidMount

This is a lifecycle hook provided by React that runs ONLY on the client. On the server, the component’s render method is called but componentDidMount is NOT invoked. So we place all our API-calling Flux actions inside it, and construct the render methods of all our top-level components carefully so that the first render gives out the shell instead of an empty container.
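Here’s a stripped-down sketch of the idea; the component, action and prop wiring are illustrative, not the actual Flipkart code:

// ProductPage.js (sketch)
import React from "react";
import ProductActions from "../actions/ProductActions"; // hypothetical Flux action creator

export default class ProductPage extends React.Component {
  componentDidMount() {
    // runs only in the browser, never during React.renderToString at build time,
    // so the generated markup stays a data-free shell
    ProductActions.fetchProduct(this.props.productId); // productId plumbed in by the router (simplified)
  }

  render() {
    if (!this.props.product) {
      // "pre-data" state: placeholders only — this is what the page shell captures
      return <div className="product product--shell" />;
    }
    // "post-data" state: shell + content, re-rendered once the Flux store updates
    return (
      <div className="product">
        <h1>{this.props.product.title}</h1>
      </div>
    );
  }
}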

Parameterised Routes

We decided to create shells for every path rather than a single generic one that could be used for anything. We listed all the paths the application uses (and will use), wrote a small script that iterated through all the routes we had defined for react-router, and got this —

/:slug/p/:itemid
/(.*)/pr
/search
/accounts/(.*)

During build time, how can you navigate to a route (so as to generate HTML Page Shells for it) when the route contains an expression that is resolved only at run time?

            Hackity Hack Hack Hack

React-router provides a utility function that lets you inject values for the params in a URI. So it was easy for us to simply hack it and inject placeholder values for all the possible params in our URIs.

// PathUtils is an internal module of react-router 0.13 (the path may differ in other versions)
var PathUtils = require('react-router/lib/PathUtils');

function convertParams(p) {
  return PathUtils.injectParams(p, {
    splat: 'splat',
    slug: 'slug',
    itemId: 'itemId'
  });
}

Note: We used react-router 0.13. The APIs might have changed in later versions, but react-router likely exposes something similar.

And now we get this route table.

Route Defined       Route To Render       Page Shell
/:slug/p/:itemid  → /slug/p/itemId      → product
/(.*)/pr          → /splat/pr           → browse
/search           → /search             → search
/accounts/(.*)    → /accounts/splat     → accounts

It simply works because the shell we are generating does NOT contain any content. It is the same for all similar pages (say product page).
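Putting these pieces together, the build-time shell generator looks roughly like this; a sketch against the react-router 0.13 API, where the route table, convertParams helper and output paths are illustrative:

// generate-shells.js (sketch)
var fs = require("fs");
var React = require("react");
var Router = require("react-router");
var routes = require("./routes");               // the same route definitions the client uses
var convertParams = require("./convertParams"); // the param-injection helper shown above

var shells = {
  product: "/:slug/p/:itemid",
  browse: "/(.*)/pr",
  search: "/search",
  accounts: "/accounts/(.*)"
};

Object.keys(shells).forEach(function(name) {
  // inject dummy params so the parameterised route becomes a concrete, renderable path
  var path = convertParams(shells[name]);
  Router.run(routes, path, function(Handler) {
    // componentDidMount never fires here, so no data is fetched: we get the bare shell,
    // which later goes into index.hbs as {{content}}
    var markup = React.renderToString(React.createElement(Handler));
    fs.writeFileSync("shells/" + name + ".html", markup);
  });
});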

Two-level templating

We are back to hbs.config.js. So we have one single hbs file — index.hbs — with the following content.

// this is where the build-time rendered app shell goes in
// (React.renderToString output)
<div id="root">{{content}}</div>

// and some stuff for the second level
// (notice the escape)
<script nonce="\{{nonce}}"></script>
<script src="\{{vendorjs}}"></script>
<script src="\{{appjs}}"></script>

Build-time rendering: For each type of route, we generate a shell with the rendered markup injected into index.hbs as {{content}}, and get the hbs for that particular route, example — product.hbs

Runtime rendering: We insert the nonce, bundle versions and other small variables into the corresponding app-shell hbs and render it to the client.
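At runtime the second level is plain Handlebars rendering, roughly like this sketch, assuming Express and the handlebars npm package; the paths, bundle names and nonce generation are illustrative:

// server.js (sketch)
var fs = require("fs");
var crypto = require("crypto");
var express = require("express");
var Handlebars = require("handlebars");

var app = express();

// product.hbs was produced at build time by injecting the React-rendered shell into
// index.hbs; the escaped \{{...}} placeholders survived that pass as literal {{...}}
var productShell = Handlebars.compile(fs.readFileSync("shells/product.hbs", "utf8"));

app.get("/:slug/p/:itemid", function(req, res) {
  res.send(productShell({
    nonce: crypto.randomBytes(16).toString("base64"), // per-request CSP nonce
    vendorjs: "/js/vendors.abc123.js",                // hashed bundle URLs from the build
    appjs: "/js/client.abc123.js"
  }));
});

app.listen(3000);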

The good thing is that the user gets rendered HTML content before the static resources load, parse and execute. And this is faster than server-side rendering, as it is as good as serving a static HTML file: the only variables in the template are a few values fetched from memory. This improves the response time and the time to first paint.

The end

And with all this, plus a lot of gotchas I’ve missed out, we put Flipkart Lite into production successfully. Oh wait! This didn’t start as a story. Anyway, thanks for reading this far.

The solutions described above are NOT necessarily the best solutions out there; this was one of our first attempts at these problems. It worked for us, and I’m happy to share it with you. If you find improvements, please do share :).

– by Boopathi Rajaa, Flipkart Web Team

Progressive Web App: A New Way to Experience Mobile

There have been a few turning points in the history of the web platform that radically changed how web apps were built, deployed and experienced. Ajax was one such pivot that led to a profound shift in web engineering. It allowed web applications to be responsive enough to challenge conventional desktop apps. However, on mobile, the experience was defined by native apps, and web apps hardly came close to them, at least until now. The Mobile Engineering team at Flipkart discovered that with the right set of capabilities in a browser, a mobile web app can be as performant as a native app.

Thanks to the Extensible Web Manifesto’s efforts to tighten the feedback loop between the editors of web standards and web developers, browser vendors started introducing new low-level APIs based on feedback from developers. The advent of these APIs brings unprecedented capabilities to the web. We, at Flipkart, decided to live on this bleeding edge and build a truly powerful and technically advanced web app while working to further evolve these APIs.

Here’s a sneak peek into how we’ve created an extremely immersive, engaging and high-performance app.

Immersive: While native apps are rich in experience, they come with the price of an install. Web apps solved the instant-access problem, but network connectivity still plays a significant role in defining the web experience. There have been multiple attempts at enabling offline web apps in the past, such as AppCache and using LocalStorage/IndexedDB. However, these solutions failed to model the complex offline use cases described below, and were painful to develop with and debug. Service Workers replace these approaches by providing a scriptable network proxy in the browser that lets you handle requests programmatically. With Service Workers, we can intercept every network request and serve a response from cache even when the user is offline.
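Stripped of library niceties, the primitive looks something like this minimal sketch of a cache-falling-back-to-network fetch handler, with an illustrative cache name:

// sw.js (simplified sketch)
self.addEventListener("fetch", function(event) {
  if (event.request.method !== "GET") return;
  event.respondWith(
    caches.match(event.request).then(function(cached) {
      // serve from cache when we can: this is what keeps the app usable offline
      if (cached) return cached;
      return fetch(event.request).then(function(response) {
        // stash a copy of successful responses for the next (possibly offline) visit
        var copy = response.clone();
        caches.open("flipkart-lite-v1").then(function(cache) {
          cache.put(event.request, copy);
        });
        return response;
      });
    })
  );
});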


We chose to use SW-Toolbox, a Service Workers wrapper library that enables simple patterns such as NetworkFirst, CacheFirst or NetworkOnly. SW-Toolbox provides an LRU cache, which our app uses for storing previous search results on the browse page and the last few visited product pages. The toolbox also has a TTL-based cache invalidation mechanism that we use to purge out-of-date content. Service Workers provide the low-level scriptable primitives that make this possible.
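In sw-toolbox terms, the routing ends up looking something like this sketch; the URL patterns, cache names and limits below are illustrative, not our production values:

// sw.js (sketch) — sw-toolbox is bundled in via sw.config.js
var toolbox = require("sw-toolbox");

// browse/search results: prefer fresh data, keep the last few responses as an LRU cache
toolbox.router.get("/search(.*)", toolbox.networkFirst, {
  cache: { name: "search", maxEntries: 10 }
});

// product pages: serve from cache when available, purge entries older than a day
toolbox.router.get("/:slug/p/:itemid", toolbox.cacheFirst, {
  cache: { name: "products", maxEntries: 20, maxAgeSeconds: 24 * 60 * 60 }
});

// anything sensitive (e.g. checkout) always goes to the network
toolbox.router.get("/checkout(.*)", toolbox.networkOnly);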


Making the right solution work was as hard as devising it. We faced a wide spectrum of challenges from implementation issues to dev tooling bugs. We are actively collaborating with browser vendors to resolve these challenges.

One such significant challenge that emerged from our use of Service Workers was to build a “kill switch”. It is easy to end up with bugs in Service Workers and stale responses. Having a reliable mechanism to purge all caches has helped us to be proactively ready for any contingencies or surprises.
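The exact mechanism isn’t described here, but the core of such a kill switch is small. For example, a sketch where the page posts a message to the active service worker, which then wipes every cache it owns:

// in the page (sketch)
if (navigator.serviceWorker && navigator.serviceWorker.controller) {
  navigator.serviceWorker.controller.postMessage("KILL_SWITCH");
}

// in sw.js (sketch)
self.addEventListener("message", function(event) {
  if (event.data !== "KILL_SWITCH") return;
  caches.keys().then(function(names) {
    return Promise.all(names.map(function(name) {
      return caches.delete(name);
    }));
  });
});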

One more cornerstone of a truly immersive experience is a fullscreen, standalone experience launched right from the home screen. This is what the Add to Home Screen (ATHS) prompt allows us to do. When the user chooses to add to home screen, the browser creates a high-quality icon on the home screen based on the metadata in the Web Manifest. The ATHS prompt is shown automatically to the user based on a heuristic that is specific to each browser. On Chrome, if the user has visited the site twice within a defined period, the prompt will trigger. In the newer Chrome versions, we receive an event once we have matched the heuristic and can show the prompt at a later point in time.
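That event is Chrome’s beforeinstallprompt; a minimal sketch of deferring it looks like this, where the trigger for showing the deferred prompt is illustrative:

var deferredPrompt;

window.addEventListener("beforeinstallprompt", function(e) {
  e.preventDefault();   // stop the automatic banner
  deferredPrompt = e;   // keep the event around for later
});

// later, at a moment of our choosing (e.g. after a successful order — illustrative)
function showInstallPrompt() {
  if (!deferredPrompt) return;
  deferredPrompt.prompt();
  deferredPrompt.userChoice.then(function(choice) {
    console.log("ATHS outcome:", choice.outcome); // "accepted" or "dismissed"
  });
  deferredPrompt = null;
}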

While the heuristic is indispensable to prevent spam on the web platform, we felt it was too conservative and convinced the Chrome team to tweak the heuristic for more commonly occurring scenarios. Based on our feedback, experiments are underway by the Chrome team to shorten the required delay between interactions.

Native apps use a splash screen to hide the slow loading of the home screen. The web never had this luxury, and there was a blank page staring at the user before the home screen could load. The good news is that the latest version of Chrome supports generating a splash screen, which radically improves the launch experience and perceived performance of the web app.
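The metadata driving both the standalone launch and the generated splash screen lives in the Web Manifest, roughly something like this (the values here are illustrative):

{
  "name": "Flipkart Lite",
  "short_name": "Flipkart",
  "start_url": "/?source=homescreen",
  "display": "standalone",
  "theme_color": "#2874f0",
  "background_color": "#2874f0",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}

Chrome builds the splash screen from the name, background_color and icon declared here.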


Another capability we’re championing is opening external links in the standalone app version rather than in a browser tab. Currently, there is a limitation with Android, but we are working with the Chrome team to enable this use case as soon as possible.  

Engaging: Being able to re-engage with our users on the web has always been a challenge. With the introduction of the Web Push API, we now have the capability to send Push Notifications to our users, even when the browser is closed. This is possible because of Service Workers, which live beyond the lifetime of the browser.
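Under the hood it is a subscription on the page plus a push handler in the service worker. A minimal sketch, where the notification copy, icon path and backend call are illustrative:

// in the page (sketch)
navigator.serviceWorker.ready.then(function(registration) {
  return registration.pushManager.subscribe({ userVisibleOnly: true });
}).then(function(subscription) {
  // send subscription.endpoint to our push backend (not shown)
});

// in sw.js (sketch)
self.addEventListener("push", function(event) {
  event.waitUntil(
    self.registration.showNotification("Flipkart", {
      body: "Your order has been shipped!",  // illustrative
      icon: "/icons/notification.png"        // illustrative
    })
  );
});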


High Performance: A highly performant mobile app is one that requests less data over the network and takes less time to render. With a powerful scriptable proxy and a persistent cache living in the browser, data consumption over the network can be reduced significantly. This also helped reduce the dependency on network strength and eliminate network latencies on a repeat visit.

Rendering performance has always been a challenge for the web. We saw significant performance improvements when the GPU handled rasterization compared to the CPU. Hence we decided to leverage GPU rasterization on Chrome (Project Ganesh) by including the required meta tag in our HTML. At the same time, we carefully balanced the number of GPU-accelerated composited layers by measuring composition vs. paint costs. Thirdly, we use GPU-friendly animations, namely opacity and transform transitions.

Profiling on various mobile devices using the Chrome DevTools Timeline panel and Chrome Tracing helped us identify multiple bottlenecks and optimization paths, and to make the best of each frame during an animation. We are continuously striving for 60fps animations and interactions. We use the RAIL model for our performance budgets and strive to meet and exceed expectations on each of its metrics.

All of this put together, manifested into a stellar experience for our users. It’s been a remarkable journey building this web app, working with browser vendors and pushing the limits of web platform on mobile. Over the coming weeks, we plan to roll out more detailed posts that deep-dive into the technical architectures, patterns and most importantly the lessons learned.

We believe more browser companies and developers will start thinking in these lines and make web apps even better. The web is truly what you make of it, and we have only just begun.

#FlipkartLite

#WeBuildAwesome

Last but not the least, meet the Flipkart Lite Team that did the magic — Abhinav Rastogi, Aditya Punjani, Boopathi Rajaa, Jai Santhosh, Abinash Mohapatra, Nagaraju Epuri, Karan Peri, Bharat KS, Baisampayan Saha, Mohammed Bilal, Ayesha Rana, Suvonil Chatterjee, Akshay Rajwade.

2015-11-05

(wish everyone was in the pic)