It's easy to write a feature that significantly slows down what was once a snappy application, and in the software industry this is all too common. We've all used an app whose first version was fast, only to find that a few years and hundreds of features later, the system has slowed to a crawl. As consumers, we typically put our faith in Moore's law and simply buy newer, faster phones and computers to keep up. SaaS providers can often employ the same strategy with their servers by regularly upgrading hardware. Sometimes this makes economic sense, but often it does not. It is usually better to mitigate the performance hit, or avoid it altogether, with better code.
One interesting strategy that's been on my mind recently is how to distribute code complexity between the server and its clients. The server can often handle an immense load, but it can also become a centralized bottleneck that is much more difficult to build around and maintain. Finding the right components of a feature to implement on the client side can sometimes bring immense gains, though not without trade-offs. In terms of maintainability, server code is often thought to allow faster iteration: after all, we control the server, while upgrades to native clients like iOS and Android are governed not only by the end user but also by the platform owners, Apple and Google. This can be a strong incentive to keep the client software 'dumb', a mere consumer and renderer of information.
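To make the trade-off concrete, here is a minimal sketch in TypeScript. The endpoints, the item shape, and the ranking logic are all hypothetical, for illustration only: the 'thin' variant asks the server to do the work on every request, while the 'smart' variant fetches raw data once and computes locally.

```typescript
// Hypothetical item shape and endpoints, for illustration only.
interface Item {
  id: string;
  title: string;
  score: number;
}

// "Thin" client: the server filters and ranks on every request,
// keeping the client dumb but concentrating load on the server.
async function fetchRankedThin(query: string): Promise<Item[]> {
  const res = await fetch(`/api/items/ranked?q=${encodeURIComponent(query)}`);
  return res.json();
}

// "Smart" client: fetch a raw (and cacheable) payload once, then
// filter and rank locally, trading server CPU for client CPU.
async function fetchRankedSmart(query: string): Promise<Item[]> {
  const res = await fetch("/api/items");
  const items: Item[] = await res.json();
  const q = query.toLowerCase();
  return items
    .filter((item) => item.title.toLowerCase().includes(q))
    .sort((a, b) => b.score - a.score);
}
```

The smart variant also fails and scales differently: repeat queries cost the server nothing, but the initial payload is larger and the ranking logic now ships with, and must be versioned with, the client.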
In one of our recent projects, we're planning to distribute some of the computational needs of a protocol to the web client using modern frameworks; a rough sketch of the idea appears below. The result will attempt to capture the current sweet spot for where an application's logic should live, perhaps leading to both 'smart' clients and servers. Yet as the last few years have shown, this balance is always shifting and needs to be constantly reevaluated.
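As a sketch of what that offloading can look like in a modern browser, a Web Worker keeps heavy computation off the UI thread. The file names and the squared-sum computation here are stand-ins, not our actual protocol:

```typescript
// worker.ts -- runs off the main thread; the computation is a placeholder.
onmessage = (e: MessageEvent<number[]>) => {
  const result = e.data.reduce((acc, x) => acc + x * x, 0);
  postMessage(result);
};
```

```typescript
// main.ts -- hand the work to the worker so the page stays responsive.
const worker = new Worker(new URL("./worker.ts", import.meta.url), {
  type: "module",
});
worker.onmessage = (e: MessageEvent<number>) => {
  console.log("computed client-side:", e.data);
};
worker.postMessage([1, 2, 3, 4]);
```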