
How we built a frontend app for WSO2 Identity Server with React

WSO2 Identity Server has always been known for its technical brilliance and feature richness. But that wasn’t enough to satiate us. In order to provide an unparalleled user experience, we wanted to add one more arrow to our quiver. Thus, we introduce our brand new Console app!

The beta version of our Console app is available with the 5.11.0 version of WSO2 Identity Server. This app provides a vastly improved user experience, allowing both administrators and developers to carry out their tasks through an intuitive and carefully-crafted user interface. 

Throughout the release cycle, the app underwent many iterations of designs and redesigns as our team brainstormed different ideas and experimented with different implementations. This is the Console app’s origin story.

The early days

Soon after wrapping up work on our User Portal app, which will be known as the My Account app from 5.11.0 onwards, we hit the ground running with work on the Console app. When we started, we were sure of one thing—that is, just like our My Account app, it was going to be written in React.

React’s virtual DOM, which makes UI updates faster and more efficient, its superior developer experience, and, of course, JSX made it an automatic choice for us.

We settled on Semantic UI to build our UIs since its theming could be used outside our React apps too. This let us share the same theme between our React apps and our authentication portals, which were written using JSP.

Using a Mono Repo

Since our My Account app also uses React and the Semantic UI framework, we wanted to reuse some of the utilities and components in the Console app. However, since we continued to work on improving the codebase of My Account, we needed a way to modify the reusable code while continuing to use it in the Console app.

We could have moved the reusable code to a separate repository, published it to the npm registry, and then added it as a dependency of our Console app. But this would have severely slowed down our development workflow. Every time we modified the reusable code, we would have had to test it, publish it, and then bump the version of the dependency in the Console app to pull in the new changes.

Even then, there was no guarantee that the changes would work as intended in the apps. The best way to make sure the reusable code does what it is supposed to is to test it in the apps themselves. 

This is where the concept of a mono repo came to our rescue. A mono repo allows you to keep multiple packages within the same repository and use one package as a dependency of another. This fit our bill perfectly, and after some cursory online research, we settled on Lerna, a stable and popular mono repo tool.

So, now, we have both the My Account and Console app code in the same repository. The reusable React components have been turned into a component library that lives in the modules directory of the same repo, and the reusable TypeScript code has been bundled into the Core module.
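For the curious, a Lerna setup boils down to a lerna.json at the repo root telling Lerna where the packages live. Here is a minimal sketch (the package globs below are illustrative, not our exact layout):

```json
{
    "packages": [
        "apps/*",
        "modules/*"
    ],
    "version": "independent",
    "npmClient": "npm"
}
```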

Consequently, what we have now are two different frontend apps sharing code within the same repo. This makes maintenance a breeze, improves the development workflow, and makes our code more manageable. Stay blessed, mono repos!

To Redux or Not to Redux

One question that we consistently kept asking ourselves, and struggled to answer, was whether or not we needed Redux. Of course, our muscle memory would tell us to install Redux as soon as we bootstrap a React project. But, do we really need Redux? 

The answer to this question depends on whether you need a global single source of truth. Every React component has its own source of truth in the form of state, and props can be used to pass data down the component tree. By cleverly designing the component tree, or thinking in React (as the React folks love to call it), you can reduce the need for a global single source of truth.

More often than not, components make API calls to persist their data in the backend, and the backend data will serve as the single source of truth you need. Unless there is a pressing need to reduce the number of API calls, or you have to transform the returned data before using it, this method would suffice.

However, there could be scenarios where you would want all the components to have access to particular data. In such a case, it will be overkill to pass the data as props to all the components. Here, there is a strong case for using a library like Redux. But there is more to ponder.

React’s new Context API allows any component to subscribe to data, thereby letting components access data from one source without props. This is similar to what Redux offers. So, which one do you choose?
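To give you an idea, here is a minimal Context API sketch (the profile shape and component names are made up for illustration):

```tsx
import React, { createContext, useContext } from "react";

// A hypothetical profile object shared through context.
const ProfileContext = createContext<{ displayName: string } | null>(null);

// Any descendant can subscribe to the context without prop drilling.
const Header = (): JSX.Element => {
    const profile = useContext(ProfileContext);
    return <h1>{profile?.displayName}</h1>;
};

export const App = (): JSX.Element => (
    <ProfileContext.Provider value={{ displayName: "Jane Doe" }}>
        <Header />
    </ProfileContext.Provider>
);
```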

Initially, we managed to steer clear of the need for a global single source of truth. However, we soon realized that we needed to access user profile information globally. Since the use case was simple, we deliberately chose the Context API. However, our needs started getting more complex. We also needed to store runtime configuration details (more on this later) and localization information.

As we struggled to zero in on one of the two, we also happened upon React-Redux’s performance issues after its migration to the Context API, and a React developer’s advice against treating the Context API as a replacement for Flux-like architectures.

And finally came the clincher: we wanted to access the single source of truth outside React components and JSX files. Redux provides an API that lets us access the state outside React, whereas this is impossible with the Context API. So, we plowed ahead with Redux.
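Here is a rough sketch of what that looks like (the reducer and state shape are hypothetical); `store.getState()` works from any plain module, no component required:

```ts
import { createStore } from "redux";

// A minimal, hypothetical reducer holding profile data.
const rootReducer = (
    state = { profile: { displayName: "" } },
    action: { type: string }
) => state;

export const store = createStore(rootReducer);

// Outside React (say, in a plain utility module or an HTTP interceptor),
// the state can still be read through the store's own API.
export const getDisplayName = (): string => store.getState().profile.displayName;
```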

Functions or Classes?

With hooks, React made functional components as powerful as class-based ones, and soon we had to decide which one we were going to use. I personally loved the assortment of lifecycle methods class-based components offered and had misgivings about hooks’ ability to replace the functionality of a class-based component.

However, we wanted to escape the Higher-Order Component Wrapper Hell in class-based components, and React deprecating some of the lifecycle methods implied that functional components may be the future. So, we decided to go ahead with functional components.

Initially, we found having to use the `useEffect` hook instead of the lifecycle methods stifling, but, admittedly, functional components grew on us with time. Once we got the hang of hooks, we found functions to be a lot simpler and more straightforward than classes. The migration was a daunting task at the beginning, but once you start thinking in terms of functional components, rest assured, you will fall for them.
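For instance, here is roughly how `useEffect` stands in for a pair of lifecycle methods (the Clock component is just an illustration):

```tsx
import React, { useEffect, useState } from "react";

// A sketch of useEffect replacing componentDidMount/componentWillUnmount.
export const Clock = (): JSX.Element => {
    const [now, setNow] = useState(new Date());

    useEffect(() => {
        const timer = setInterval(() => setNow(new Date()), 1000); // on mount
        return () => clearInterval(timer);                         // on unmount
    }, []); // the empty dependency array makes this effect run only once

    return <span>{now.toLocaleTimeString()}</span>;
};
```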

This is not to say functional components are without their shortcomings, as we soon learned when we wanted error boundaries. Error boundaries make sure an error in a component doesn’t break the whole app. Instead, you can handle the error within the component and show a fallback UI. The problem was that error boundaries are not possible in functional components.

So, we decided to wrap the functional components on a given page with a class-based component so that, at most, only that page would break.
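The wrapper looks something like this (a sketch; the fallback markup is ours to choose):

```tsx
import React from "react";

// Error boundaries must be class-based; this wraps a page's functional
// components so a crash is contained to that page.
export class PageErrorBoundary extends React.Component<
    { children: React.ReactNode },
    { hasError: boolean }
> {
    public state = { hasError: false };

    public static getDerivedStateFromError(): { hasError: boolean } {
        return { hasError: true };
    }

    public render(): React.ReactNode {
        // Fallback UI shown when any child throws during rendering.
        return this.state.hasError
            ? <p>Something went wrong on this page.</p>
            : this.props.children;
    }
}
```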

Lazy Loading

As our app neared maturity, we realized that the final bundle was several megabytes in size. Such a big script file causes a considerable delay in loading the app, negatively impacting the user experience. We wanted to cut down the size of the bundle to allow faster loading.

Of course, we did the obvious thing—we analyzed our dependencies and tried replacing larger ones with smaller alternatives. But that didn’t show any significant improvement in the bundle size. We soon made peace with the fact that our app was huge and our bundle was going to be large as a result. 

So, we had to split our code into multiple chunks. Dynamic imports help us by creating chunks of our code and making sure they are loaded only when the user requires them. So, when the user loads the app, only the code needed to render the landing page is loaded. As the user navigates through the app, the required chunks can be dynamically loaded. This addresses the problem of the app taking a long time to load.

React goes a step further and helps us to render a dynamic import as a regular component with the React Lazy feature. All we needed to do was pass a function that would return a dynamic import as an argument into the React.lazy() method. 

But this introduces a new issue. Imagine you are lazy-loading a component. The app will have to wait till the required code chunk is loaded before the component can be rendered. Until then, you are not going to see anything in the component’s place. This will impact the user experience. Ideally, we should show a loader until the component is loaded. How can we do it?

Don’t worry, React has (once again) got your back. The lazy-loaded component is actually supposed to be rendered inside a Suspense component. The fallback prop of the Suspense component accepts a fallback component that will be rendered until the lazy-loaded component is ready. 
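Putting the two together looks roughly like this (the page component and import path are made up):

```tsx
import React, { lazy, Suspense } from "react";

// The import path is hypothetical; the bundler splits it into its own chunk.
const UsersPage = lazy(() => import("./pages/users"));

export const App = (): JSX.Element => (
    // The fallback is rendered until the chunk arrives over the network.
    <Suspense fallback={<div>Loading...</div>}>
        <UsersPage />
    </Suspense>
);
```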

Using React’s lazy loading tremendously improved our app’s load time, and it is a must-use in all large apps.

Runtime Configuration

We wanted our app to be configurable. No, we didn’t want our users to dig through the code just to change the branding. We wanted a code-free way of doing it. And it should be done during runtime. 

So, we created a JSON file where all the deployment configuration details are stored. The app dispatches a GET request during initialization to load this file. Once the response is received, we parse the JSON and store the resulting object in a global variable.
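In code, the idea is roughly this (the file name, config shape, and variable name are assumptions for illustration):

```ts
// A minimal sketch of runtime configuration loading.
interface DeploymentConfig {
    appName: string;      // hypothetical fields
    serverOrigin: string;
}

export let runtimeConfig: DeploymentConfig;

export const initConfig = async (): Promise<void> => {
    // GET the config file during app initialization...
    const response = await fetch("/deployment.config.json");
    // ...and keep the parsed object in a module-level (global) variable.
    runtimeConfig = await response.json();
};
```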

Rebrand apps on the fly

The app would access this global variable to load the relevant configuration. For instance, the name of the app is obtained from this configuration. So, now you know what should be done to change the name of the app. Change the name in the JSON file, reload the app, and voila, the name in the header changes too!

Supporting Internet Explorer

Some of our customers continue to use Microsoft Internet Explorer and Microsoft Edge Legacy, so it is important that our app runs fine in these browsers. Since some of the native APIs that work in most other browsers don’t work in Microsoft’s legacy browsers, we had our work cut out for us.

We initially tried polyfilling APIs that are not supported by Internet Explorer but we quickly found that it was almost impossible to manually polyfill all the missing browser APIs. And we also had issues with some of the CSS not rendering properly in Microsoft Internet Explorer. So, we had to look for an alternate solution. 

Enter Babel. Switching from ts-loader to babel-loader allowed us to transpile our code to run on Internet Explorer. In addition, we configured Babel to use core-js to polyfill APIs that don’t work in Internet Explorer.
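A Babel configuration along these lines does the trick (a sketch; your preset options may differ):

```js
// babel.config.js (illustrative only)
module.exports = {
    presets: [
        ["@babel/preset-env", {
            useBuiltIns: "usage", // inject core-js polyfills only where the code needs them
            corejs: 3
        }],
        "@babel/preset-react",
        "@babel/preset-typescript"
    ]
};
```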

But this doesn’t fix the CSS issues. So, we used the autoprefixer package along with postcss-loader for webpack to transform our CSS so that it renders properly in Internet Explorer.
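Wired into webpack, that looks something like this (a sketch; option names vary across postcss-loader versions):

```js
// webpack.config.js excerpt (illustrative only)
const autoprefixer = require("autoprefixer");

module.exports = {
    module: {
        rules: [{
            test: /\.css$/,
            use: [
                "style-loader",
                "css-loader",
                {
                    loader: "postcss-loader",
                    options: { postcssOptions: { plugins: [autoprefixer] } }
                }
            ]
        }]
    }
};
```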

However, polyfilling can end up increasing the size of the final bundle. You always have to make a tradeoff between supporting more browsers and making sure this doesn’t affect the app’s performance in the most-used browsers. Where you draw the line and decide which browsers not to support is up to you as a developer.

But how do you tell Babel and autoprefixer which browsers you intend to support? Well, the browserslist attribute in the package.json file allows us to convey this information in a very simple way. You can list the names of the browsers you want to support in an array, or, as we have done, you can target browsers based on their market share. We have set ours to greater than 0.2%, which means our app supports every browser with a market share above 0.2%.
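In package.json, that boils down to something like:

```json
{
  "browserslist": [
    "> 0.2%"
  ]
}
```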

Wrapping up

The journey certainly isn’t over. After all, we have only released the beta version of the app. As we constantly strive to improve all our apps and add new features, we are certain that there will be many more adventures to be had and lessons to be learned. For now, our Console app is gearing up for another release cycle of refinement and enrichment.

 

Theviyanthan Krishnamohan

Tech geek, cricket fan, failing 'writer', attempted coder, and politically incorrect.
