
The Burdens of Data

Data becomes more valuable as it compounds. A constant stream of newly generated data makes the existing snapshot more useful because it allows you to see new patterns emerge that weren’t clearly visible before. Organizations realize this; hence, they have analysts working to extract valuable insights from the data they have and produce.
Although the value is clear, not everyone within an organization is equally happy to work with data. To software engineers, data is a burden. Or more accurately: state is a burden. Stateless code is easy to reason about because there are no external factors at play. External factors increase system complexity and thus make the process of writing code slower.
Managing data is a necessary evil; software engineers are burdened with the task of ensuring data is consistent, up-to-date, and available to the rest of the organization. For organizations to be able to ship quickly, it’s important to unburden engineers as much as possible. There is a simple reason for this: the more burden placed on an engineer, the slower the project they’re involved in progresses. Software needs to be maintainable to be able to ship fast. Dealing with data is a part of that.
Unburdening
This article is an exploration of how to unburden software engineers in the context of working with data in larger organizations. By making working with data easier, I believe organizations are able to ship high-quality software faster, with deeper integration between products. In the metaphor of compounding value, we change the formula to extract more value from the data already available.
Data as the Product
To a software engineer maintaining a product, data might seem like an implementation detail: a means to an end. In modern organizations, however, this is not how data is treated. Data is used, aggregated, and analysed outside of the product’s business logic. Engineers should not treat data as an afterthought; instead, they should be aware of its secondary uses.
From the perspective of a business analyst, data is never an implementation detail. Data says something about how a product is, or is not, being used. On a larger scale, trends become visible through data: it might become clear that no one uses a product in a certain way, or that only a subset of the product’s functionalities is being used. When a business analyst values this data while a software engineer treats it as an implementation detail, friction is bound to arise.
Software engineers want to evolve their product by adding new features. To do this, they need to be free to change their data model as they see fit. As an example, let’s consider a feature that puts an item on sale. To support it, a discount column is added to the transactions table. This is problematic, because the business analyst will have previously assumed that amount indicated the total volume for this transaction. With the new column, the analyst’s results will now be incorrect, destroying the business value created by the analysis in the process. The stakeholders are at odds.
The problem described above has long been solved by the idea of encapsulation. Hiding implementation details is paramount when building any sort of complex system. This comes back to the earlier point about the relation between maintainability and the speed of shipping. An engineer must be able to change their data model; without this, they would be unable to ship new features in their product. The business analyst wants to have the cake while the engineer eats it. We need to satisfy both their requirements so we can extract business value from the data while also improving our product. The solution lies precisely in the principle of encapsulation: the engineer should provide the business analyst with an interface so they can use the data for their needs.
This idea adds another item to the engineer’s backlog, so let’s first convince them this is a good use of their time. First of all, the engineer is the only person in the organization who truly understands what the data means (well, the rest of their team too, hopefully)1. Secondly, the idea of encapsulation should already resonate with the engineer; they understand how it helps them—and others—reason about the code. Lastly, a typical engineer in a large organization has spent time on calls going over what exactly the data they produce means. If they create a clear interface beforehand, it will save time in the long run.
So what does this backlog item look like? It need not be complicated: a simple transformation from the internal data model to a public schema should suffice. This is the sort of transformation that code bases are full of; it will be similar to, if not the same as, the transformation between a database row and an entity exposed via a REST API.
// Updated internal data model, including the column added in v2
type InternalTransaction = {
  amount: number;
  discount?: number; // Column added in v2
};

// Public schema for consumption by analysts: amount remains the total
// for the transaction
type PublicTransaction = {
  amount: number;
};

// Transformation from the internal data model to the public schema
function transformToPublicSchema(t: InternalTransaction): PublicTransaction {
  return { amount: t.amount - (t.discount ?? 0) };
}
Engineers in product teams will have to get used to the idea that the data produced by their product is as much the product as the product itself. Business value is created by both aspects. The small effort required to achieve proper encapsulation of the data model pays off in the form of ownership over the data and in turn autonomy when changing the data model. This results in faster shipping and more harmony between business units. A win-win.
Syncing over Fetching
Taking ownership of the data produced by an application also means the owner should be burdened with making sure the data is available to other parts of the organization. The last section discussed making data available to analysts, but they’re certainly not the only stakeholders in an organization who are interested in product data. Most organizations do not share CSVs between teams; instead, they create APIs or use an event streaming platform to move data around. These methods are not as primitive as sharing CSVs, but they come with problems of their own. Obtaining data is typically a very imperative process, which requires a good amount of boilerplate code. I believe that a synchronization approach can reduce complexity by making data sharing declarative.
Data fetching is inherently an imperative process: first you specify where and how the data is obtained, then you do something with it. Like all imperative processes, error scenarios have to be handled explicitly. Fetching from internal APIs seems trivial on the surface, but once you consider the failure scenarios, a number of practical concerns arise. The downstream service could be intermittently unavailable, or the network could be flaky; for these cases, retries are needed. On a higher level, there are concerns such as throttling and circuit breakers.
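To make this concrete, here is a minimal sketch of what fetching from a hypothetical internal transactions API could look like once retries and timeouts are accounted for; the endpoint, retry count, and backoff are made up for illustration.

// Minimal sketch of imperative fetching: every failure mode is handled
// explicitly by the integrating team. Endpoint, retry count, and backoff
// are hypothetical.
async function fetchTransactions(customerId: string): Promise<unknown[]> {
  const maxAttempts = 3;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const response = await fetch(
        `https://transactions.internal/customers/${customerId}/transactions`,
        { signal: AbortSignal.timeout(2000) } // guard against a hanging downstream service
      );
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      return await response.json();
    } catch (err) {
      if (attempt === maxAttempts) throw err; // give up after the final attempt
      await new Promise((resolve) => setTimeout(resolve, attempt * 500)); // simple backoff
    }
  }
  return []; // unreachable; satisfies the return type
}

None of this is conceptually hard, but it is boilerplate that every integrating team ends up writing and maintaining.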
It’s also commonplace to use an event streaming platform such as Kafka for the purpose of transferring data. Event streaming is more resilient than on-demand fetching, as the asynchronous nature sidesteps most failures we could see when fetching. Like fetching, however, streaming comes with significant development work of its own: setting up topics, pushing events using the outbox pattern, and implementing consumers on the receiving side.
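For a sense of the work on the receiving side, the sketch below uses the kafkajs client; the broker address, topic name, and consumer group are assumptions, not a prescribed setup.

import { Kafka } from "kafkajs";

// Rough sketch of the consumer an integrating team has to write and operate.
// Broker address, topic, and group id are hypothetical.
const kafka = new Kafka({ clientId: "billing-service", brokers: ["kafka:9092"] });
const consumer = kafka.consumer({ groupId: "billing-transactions" });

async function run() {
  await consumer.connect();
  await consumer.subscribe({ topic: "transactions.v2", fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value?.toString() ?? "{}");
      // ...apply the event to the local data model, deduplicate, handle ordering, etc.
      console.log("received event", event);
    },
  });
}

run().catch(console.error);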
Events themselves also come with a number of drawbacks that are not often considered. Events, by definition, describe something that happened. This is conceptually different from entities, which simply exist. The temporal aspect of events is a small paradigm shift that engineers have to be aware of, and the addition of time adds complexity to every service integrating with events. I believe this complexity is one of the reasons why event sourcing2 has not seen widespread adoption.
There are also more practical concerns, such as the possibility of interpreting event streams differently between teams. Consider a simple example of two successive events: RepositoryCreated and RepositoryDeleted, both related to repository 123. If team A integrates both events, the end result will be that repository 123 does not exist. Team B might not implement an event handler for RepositoryDeleted, and thus the end result will be that repository 123 exists.
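The divergence is easy to reproduce. The sketch below replays the same two events through both teams’ (hypothetical) handlers and ends up with contradictory answers.

// Hypothetical repository events, mirroring the example above.
type RepositoryEvent =
  | { type: "RepositoryCreated"; repositoryId: string }
  | { type: "RepositoryDeleted"; repositoryId: string };

const stream: RepositoryEvent[] = [
  { type: "RepositoryCreated", repositoryId: "123" },
  { type: "RepositoryDeleted", repositoryId: "123" },
];

// Team A handles both event types, so repository 123 ends up deleted.
const teamA = new Set<string>();
for (const event of stream) {
  if (event.type === "RepositoryCreated") teamA.add(event.repositoryId);
  if (event.type === "RepositoryDeleted") teamA.delete(event.repositoryId);
}

// Team B never implemented RepositoryDeleted, so repository 123 still "exists".
const teamB = new Set<string>();
for (const event of stream) {
  if (event.type === "RepositoryCreated") teamB.add(event.repositoryId);
}

console.log(teamA.has("123")); // false
console.log(teamB.has("123")); // true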
As opposed to fetching, synchronization is a declarative way of obtaining data. It allows an integrator to say “give me data X owned by team Y”. This is a huge productivity boost compared to imperatively fetching data when needed, or implementing consumers in an event streaming setup. Synchronization also enables reactivity. Downstream services consuming data can automatically be kept up-to-date: when a user request comes in it can be satisfied instantly as the data is already available.
The remaining question is what a synchronization abstraction should look like. Relatively recently, synchronization has seen use in the context of frontend applications3. In those frameworks the abstraction is a synchronization engine: a centralized server that pushes data from a global changelog to connected clients, where it is integrated directly into the client’s local database. To be useful to organizations, the synchronization engine should act as a control plane, as there is no single server containing all the organization’s data. Clients would connect to the control plane and broadcast what data they are interested in, and the engine would make sure that data is pushed to them. The sink of the data would be the service or product’s database; as a result, the integrating team can query the synced data as if it were their own. To be easily adoptable, the sync engine should work with existing infrastructure. In practice this could simply mean synchronizing between two Postgres databases using Kafka as the event streaming platform.
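What could this look like to an integrating team? Below is a sketch of a declarative subscription handed to the control plane; the client interface, schema name, and table name are hypothetical and only meant to show the shape of “give me data X owned by team Y”.

// Hypothetical sync engine client; the interface and names are illustrative only.
interface SyncEngine {
  register(subscription: Subscription): Promise<void>;
}

type Subscription = {
  consumer: string;
  wants: Array<{
    owner: string;           // "team Y" that owns the data
    schema: string;          // public schema published in the catalog
    sink: { table: string }; // local Postgres table the engine keeps up to date
  }>;
};

// Declarative statement of intent: no fetching code, no consumers to operate.
const subscription: Subscription = {
  consumer: "email-client-backend",
  wants: [
    {
      owner: "calendar-team",
      schema: "calendar.events.v1",
      sink: { table: "synced_calendar_events" },
    },
  ],
};

async function registerWithControlPlane(engine: SyncEngine) {
  // The engine arranges the underlying topics and change streams; the
  // integrating team only sees rows appearing in synced_calendar_events.
  await engine.register(subscription);
}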
Synchronization enables a renewed focus on the actual data. The declarative approach hides the how and lets teams focus on the what. The mundane implementation details of fetching or streaming are no longer relevant, as they are abstracted away by the sync engine. The problems associated with having to define events are also removed, which means we can reason about entities without considering the temporal aspect. In short, synchronization greatly reduces the burden of data sharing within organizations.
Product Integration
As organizations grow and the development of products becomes increasingly distributed, the cohesion between products gets harder to maintain. For the best customer experience, sharing and integrating data between products is key. A simple example is an email client showing calendar data when the user gets invited to an event. High cohesion binds users to your product by keeping them in your ecosystem. Engineers are burdened with the task of maintaining this integration.
To find integration opportunities between products, an engineer needs to be aware of the data that other teams are producing. That means the burden we want to place upon the engineer is the burden of knowing about absolutely all data within the organization. Clearly, this is a pipe dream.
The solution is actually very attainable. Earlier in this post, the idea of a data schema related to products was already proposed; the next logical step is to publish these schemas so teams can easily find data that might be relevant to their product. A data catalog is not a new idea at all; most organizations already have one as part of their data warehouse. The evolution (albeit small) of this idea is to think about the catalog on a “database row” level instead of on an analytical level: operational data vs. analytical data4.
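As an illustration, a catalog entry at this level could be as small as the sketch below; the entry format is hypothetical and reuses the public Transaction schema from earlier.

// Hypothetical operational catalog entry: one row per transaction, described
// at the level an integrating product team would consume it.
const catalogEntry = {
  owner: "payments-team",
  name: "payments.transactions.v2",
  description: "One row per settled transaction; amount is net of any discount.",
  fields: [
    { name: "amount", type: "number", description: "Total amount of the transaction" },
  ],
};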
Consolidating
The concept of “data as a product” and the necessity of defining an interface naturally leads to the idea of a data catalog. Without a catalog no one would be able to find data to interface with in the first place. The ideas discussed in the section on synchronization can only work if there is a data catalog. After all, to be declarative about obtaining data, what we want to obtain should be defined somewhere. This catalog becomes the foundation that enables both the discovery of data and the synchronization mechanisms we’ve discussed.
Practical Applications
My belief is that, when implemented, the ideas discussed in this essay will not only enable organizations to ship software faster but also enable new use cases. I already briefly mentioned frontend frameworks that make use of synchronization. These frameworks enable “local-first” software through synchronization. This has a number of advantages compared to traditional client/server software: instant (optimistic) updates, out-of-the-box multi-user live collaboration, and offline support. A sync engine within an organization enables back-office applications to be built with precisely these features.
The same advantages can also be applied to backend services. A good example use case is the “backend-for-frontend” (BFF) architecture pattern. In a synchronization paradigm, the BFF is kept up-to-date automatically by the downstream services. The BFF simply integrates data derived from the catalog; this way the provenance is preserved and the meaning of the data stays clear. But most of all, requests from the user can be satisfied immediately.
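As a sketch of what this buys the BFF, consider a request handler that answers purely from its own database, joining its own tables with one kept fresh by the sync engine. The node-postgres client and the table and column names here are assumptions.

import { Pool } from "pg";

// The BFF serves the request from local data only; no downstream call is made.
// Table and column names are hypothetical.
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function getInbox(userId: string) {
  const { rows } = await pool.query(
    `SELECT m.id, m.subject, e.starts_at AS event_starts_at
       FROM messages m
       LEFT JOIN synced_calendar_events e ON e.message_id = m.id
      WHERE m.user_id = $1
      ORDER BY m.received_at DESC`,
    [userId]
  );
  return rows; // the sync engine keeps synced_calendar_events up to date in the background
}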
In the age of AI agents, having an operational data catalog is very powerful. LLMs are all about context, and data schemas are a great way to provide context, especially when related to products. Describing schemas does not require a lot of tokens compared to actual data, and the agent can then request data as needed. Additionally, a synchronization engine can be used to continuously push data to the agent’s database. This way the agent doesn’t have to waste cycles on function calling and fetching; instead, it can query an always up-to-date local data store, which reduces complexity.
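A minimal sketch of the first half of that idea: turning catalog entries into agent context. The entry shape matches the hypothetical catalog example above; how the prompt reaches the model is out of scope here.

// Render catalog entries into a compact, token-cheap context block for an agent.
type CatalogEntry = {
  name: string;
  description: string;
  fields: Array<{ name: string; type: string; description: string }>;
};

function buildAgentContext(entries: CatalogEntry[]): string {
  return entries
    .map(
      (entry) =>
        `${entry.name}: ${entry.description}\n` +
        entry.fields.map((f) => `  - ${f.name} (${f.type}): ${f.description}`).join("\n")
    )
    .join("\n\n");
}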
Conclusion
Being declarative about obtaining data is a powerful force multiplier when it comes to software development. I cannot stress this enough. Within organizations it is hard to align backlogs so multiple teams can work towards the same goal. By making publishing data a small effort, the burden of integration is moved solely to the integrator. This sidesteps all the alignment work.
On a technical note, the fact that data can be synchronized directly to a team/product’s database means that the integrating team only has to adjust their database queries to join the newly available data. This is end-to-end declarative data integration between products! This way the code can be almost purely business logic.
Compound interest is a powerful force; by reinvesting in data infrastructure, the value derived from data keeps compounding. At the same time, software engineers are unburdened and can focus on what really matters: shipping features.
Related Work
The idea of synchronization has been around for as long as multiplayer games, but only relatively recently has it been applied to frontend software. Small iterations on existing ideas are how technology evolves. This essay’s small idea is using synchronization to make software engineering (the craft) within organizations more efficient. As such, this essay does not stand on its own; these are some of the sources that inspired it:
- When it comes to local-first software, Ink & Switch’s article on the subject must be mentioned.
- Some of the ideas discussed in this essay are (loosely) based on Martin Kleppmann’s book, “Designing Data-Intensive Applications”.
- My favorite players in the local-first space are Instant, Jazz, LiveStore, and Zero.
- Inspiration around synchronization came from Electric, PowerSync, and SQLSync.
Let’s Talk
I believe that constant reflection on the software engineering process is necessary; after all, software is eating the world (or maybe AI agents are). If you (dis)like the ideas in this essay, or are experiencing hardships related to data in your organization, I’d be happy to discuss and help out! Email me here or find me on X.
Thanks for reading!
Footnotes
1. The article “Documenting your database schema” by the CTO of Mercury resonates with the idea of having the product team (the data owner) document what their data means.
2. In event sourcing, all changes to an entity are stored as a sequence of events; the current state is derived from those events.
3. Instant and Jazz are examples of frontend frameworks making use of synchronization.
4. The difference between operational data and analytical data can be understood through the level of aggregation: the former is typically on the level of a single customer, while the latter is aggregated.