The pMD Blog


Weekly Byte: Getting insight from "Pretty Big Data"
pMD generates a firehose's worth of data around the clock from our customers. It's not petabytes of Big Data, but I like to call it gigabytes of Pretty Big Data: everything from patient clinical data coming in real time from providers, to billers processing charges, to countless external information exchanges with hospital and software systems around the country. To help our customers make sense of this data, we scrutinize all of our UI features and changes to hide anything that's not essential for someone to do their job easily and effectively.

Yet there are times when a practice needs to step back and see the forest for the trees. For a long time we've had a suite of reports that aggregate visit information across longer time scales, ranging from the charge and visit count reports to some pretty sophisticated auditing tools for different specialties. As our customers have become more and more data savvy in the new health care landscape, they've requested deeper insight into their productivity and efficiency. Sometimes these queries can very reasonably span years' worth of data. From a technical perspective, this presents some challenges in scalability and processing.

One of my current projects is designing and implementing a new flow for these data-intensive reports, one that can scale with the user's data while also providing a solid foundation for more large-scale reports. There are two basic challenges in building reporting infrastructure:

1. Making it scalable
2. Decoupling the business logic from the data as much as possible

To address scalability, we're relying on some standard practices: heavily indexed tables, intelligent batching and throttling of work, and making the processing asynchronous to user requests. This last attribute allows long-running reports to be scheduled on our job queue based on priority and load. The final delivery of the reports will then leverage our secure messaging app itself, which happens to make the report content highly secure.
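The scheduling idea above can be sketched in a few lines. This is a minimal illustration, not pMD's actual implementation: the names (ReportJob, JobQueue, BATCH_SIZE) and the priority scheme are hypothetical, and a real system would persist jobs and run them in worker processes.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

BATCH_SIZE = 2  # throttle: drain at most this many jobs per pass


@dataclass(order=True)
class ReportJob:
    priority: int               # lower number = more urgent
    seq: int                    # tie-breaker preserving submission order
    name: str = field(compare=False)


class JobQueue:
    """Priority queue for long-running reports, decoupled from user requests."""

    def __init__(self):
        self._heap = []
        self._seq = count()

    def submit(self, name, priority):
        # The user's request returns immediately; the work runs later.
        heapq.heappush(self._heap, ReportJob(priority, next(self._seq), name))

    def drain_batch(self):
        # Pull at most BATCH_SIZE jobs, highest priority first.
        batch = []
        while self._heap and len(batch) < BATCH_SIZE:
            batch.append(heapq.heappop(self._heap).name)
        return batch


queue = JobQueue()
queue.submit("yearly-visit-report", priority=5)
queue.submit("daily-charge-summary", priority=1)
queue.submit("specialty-audit", priority=3)
print(queue.drain_batch())  # ['daily-charge-summary', 'specialty-audit']
```

The batch cap is what keeps a multi-year report from starving quicker jobs: each drain pass does a bounded amount of work, and anything left over waits for the next pass.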

On the second challenge, I've written about business logic and simplicity before, but it's always a struggle to keep human affairs and constraints simple in the purely logical world of software. While working on this new infrastructure, I've found myself constantly refactoring code as the correct abstractions become clearer. One of the tell-tale signs that you need better abstractions is when you realize your business logic is becoming complex, hard to test, and hard to remember the next day. When this happens, one key approach is to question your choice of data structures. Oftentimes a map, list, multi-dimensional array, or queue is what stands between you and a decoupled, maintainable, and dare I say, beautiful business layer. In my case, the big insight was realizing that "flattening" our log data into something more actionable by the logic layer avoids conflating how the data is stored with how it should be processed.
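To make the flattening idea concrete, here's a small sketch. The shape of the raw log is made up for illustration; the point is that once the nested storage format is flattened into simple rows, the business logic never has to know how the data was stored.

```python
# Hypothetical nested log shape, as it might sit in storage.
raw_log = [
    {"provider": "Dr. A", "days": [
        {"date": "2015-03-01", "visits": 4},
        {"date": "2015-03-02", "visits": 6},
    ]},
    {"provider": "Dr. B", "days": [
        {"date": "2015-03-01", "visits": 3},
    ]},
]


def flatten(log):
    # Turn the nested storage shape into flat (provider, date, visits)
    # rows that any aggregation can consume.
    return [
        (entry["provider"], day["date"], day["visits"])
        for entry in log
        for day in entry["days"]
    ]


def total_visits(rows, provider):
    # Business logic operates on flat rows, not the storage format:
    # if storage changes, only flatten() needs to change.
    return sum(v for p, _, v in rows if p == provider)


rows = flatten(raw_log)
print(total_visits(rows, "Dr. A"))  # 10
```

The decoupling shows up at the boundary: `flatten` is the only function that knows about the nested shape, so every report downstream of it is insulated from storage decisions.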

Reporting tools don't often get credit for being the sexiest part of an application. After this project, though, I can think of few things sexier than building elegant and extensible tools while also providing clarity and perspective to our customers.