GPs can now capture more data than ever before. But if that collection process isn’t guided by a comprehensive data strategy, the opportunity can turn into a serious roadblock – especially as GPs try to scale. Travis Broad, Manager at Lionpoint Group, shares guidance on building a data strategy that will help you avoid those roadblocks, and on creating a tech stack that will allow that data strategy to flourish.
Travis Broad: Standardization is key. I think of this from a bottom-up perspective: from the investment-level view of a portfolio company all the way through to the fund and accounting information, standardization is key. You have to be able to identify the single data model that will sit across the entire organization.
Until now, people have gotten away with merging disparate data sets from different companies, sectors, and organizations, because each has lived in its own Excel files. But as people have moved into technology over the last 10-15 years, they have started to realize how spotty or incomplete their data sets are – and have recognized the obvious power of standardization.
So you have to establish what that data dictionary looks like for you. What do you look for, what are your consistent metrics, and are they consistent across the entire company? If you were to sit there and analyze your investments across the entire portfolio, do you have the same terminology for sector, for example, across all these different investments?
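A data dictionary like the one described above can be made concrete in code. The sketch below – with illustrative field names, sectors, and aliases that are assumptions, not anything from the interview – shows how a firm might declare canonical fields and an allowed vocabulary, then check each investment record against them so that, for example, "Tech" and "Technology" never coexist across the portfolio.

```python
# Minimal sketch of a firm-wide data dictionary: canonical field names,
# expected types, and an allowed vocabulary for categorical fields such
# as "sector". All field names, sectors, and aliases are illustrative.

DATA_DICTIONARY = {
    "sector": {"type": str, "allowed": {"Healthcare", "Technology", "Industrials"}},
    "revenue_usd": {"type": float, "allowed": None},
    "fund_id": {"type": str, "allowed": None},
}

# Labels that different source files might use for the same sector.
SECTOR_ALIASES = {"Health Care": "Healthcare", "Tech": "Technology", "IT": "Technology"}


def normalize_sector(label: str) -> str:
    """Map a source system's sector label onto the canonical vocabulary."""
    return SECTOR_ALIASES.get(label, label)


def validate_record(record: dict) -> list[str]:
    """Return every way one record violates the data dictionary."""
    errors = []
    for field, rule in DATA_DICTIONARY.items():
        if field not in record:
            errors.append(f"missing field: {field}")
            continue
        value = record[field]
        if not isinstance(value, rule["type"]):
            errors.append(f"{field}: expected {rule['type'].__name__}")
        elif rule["allowed"] and value not in rule["allowed"]:
            errors.append(f"{field}: '{value}' not in allowed vocabulary")
    return errors
```

Running `validate_record` over every investment record before consolidation is one way to surface the inconsistent terminology the interview describes, rather than discovering it mid-report.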
Technology’s ability to help with that has also grown. When people try to consolidate these data sets, they need a system to consolidate them into. Because of that, whenever GPs begin down the path of a technology implementation, they start by looking at the whole system architecture and how it fits into what they’re trying to do with their data strategy.
This is probably where some people trip up, because enabling a data strategy can look quite daunting, especially for a smaller organization that might not have the capabilities to do it. In reality, though, it can be done iteratively.
There’s no need to renovate an entire organization’s data flow all at once. People can adopt technology incrementally: start with the area of the business that needs the most work, renovate it, get it up and running, and then move on to the next. Focus on the quick wins, then build out the data strategy over time instead of all at once. A smaller organization that doesn’t have the manpower to do it in one big bang benefits from targeting the quick wins first and working through the rest from there.
TB: When you incorporate a number of different systems into a tech stack, there’s a risk of duplicate records or duplicate metrics across those systems. You have to establish a data model in which you know where every single type of item is mastered.
With an effective data model, you know exactly where every piece of data comes from, who uses it, through which reports and outputs, and exactly what will be impacted if you ever change it. That data model is key when you try to identify what you’re really trying to do and how to reduce risk.
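The "mastered in one place, impact known everywhere" idea can be sketched as a small registry. Everything here – the system names, the data items, the report names – is hypothetical, but the structure shows the two questions an effective data model answers: where does an item live, and what breaks if it changes.

```python
# Illustrative sketch of a system-of-record registry: each data item is
# mastered in exactly one system, and each report declares which items it
# consumes, so the impact of a change can be traced. All names hypothetical.

MASTERED_IN = {
    "nav": "fund_accounting",
    "portfolio_revenue": "portfolio_monitoring",
    "lp_commitment": "investor_portal",
}

REPORT_INPUTS = {
    "quarterly_lp_report": ["nav", "lp_commitment"],
    "deal_dashboard": ["portfolio_revenue"],
}


def system_of_record(item: str) -> str:
    """Where a data item is mastered; an unowned item is a red flag."""
    if item not in MASTERED_IN:
        raise KeyError(f"no system of record declared for '{item}'")
    return MASTERED_IN[item]


def impact_of_change(item: str) -> list[str]:
    """Which reports are affected if this data item changes."""
    return [r for r, inputs in REPORT_INPUTS.items() if item in inputs]
```

Keeping a registry like this – even as a simple shared document rather than code – is one way to guarantee every metric has exactly one master and no report consumes an unowned field.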
But that’s also where a lot of organizations run into issues when they work with multiple point solutions. There is always integration risk when you bring multiple systems together. With multiple systems in a tech stack – all unique, all with their own workflows – even with a centrally managed data model you will still hit situations where sync times don’t line up, or where data isn’t available for certain reports when you need it.
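The sync-timing risk lends itself to a simple guard. The sketch below – system names and the six-hour freshness window are assumptions for illustration – checks every source system's last sync before a consolidated report runs, so stale data is flagged instead of silently reported.

```python
# Hedged sketch of a freshness check across multiple synced systems:
# before running a consolidated report, flag any source whose last sync
# falls outside a freshness window. Names and the window are illustrative.
from datetime import datetime, timedelta


def stale_sources(last_sync: dict[str, datetime],
                  now: datetime,
                  max_age: timedelta = timedelta(hours=6)) -> list[str]:
    """Return the source systems whose data is older than the window."""
    return [name for name, ts in last_sync.items() if now - ts > max_age]
```

A report pipeline could refuse to run (or annotate its output) whenever `stale_sources` returns a non-empty list – a modest mitigation for the "sync times might not line up" problem, not a substitute for an integrated system.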
TB: First of all, an integrated system removes the pressure of building integrations in-house. I don’t think enough organizations realize how challenging it is to build those integrations themselves. Typically, that requires a dedicated IT function not only to scope and develop them but to maintain them going forward. An effective data model has to evolve over time, so if you don’t have an IT function able to incorporate those changes into the integration layer, it can’t be managed effectively.
When you take away the pressure of building integrations in-house, you take away months of development and cost. But even with a consistent data model, the data structures and requirements can be quite complex. And in an industry that is growing quickly, with this growth in the market and the amount of capital that has to be deployed, it becomes increasingly burdensome to get multiple systems to talk to each other. The integration risk is that data problems can actually disrupt principal operations.
Imagine you’re in the middle of a fundraising process and incorrect data is coming through, or your systems aren’t talking to each other. That will have a real impact on how people are able to operate. With an integrated system, you’re not looking at it as multiple systems; you simply see it as your business function – as part of your business operations.
That will also help with adoption, because people won’t need to learn different systems or capabilities. They’ll be able to log in, see the required function, and then trigger tasks and notifications down the workflow. An entire business operation can, in reality, flow through one system. And once you reach that point, implementation time is minimized, because you no longer have to worry about connecting different systems – you’re simply implementing one.
TB: For smaller organizations that are new to this – perhaps currently managing everything in Excel but aware they need a technology solution – a modular approach is likely key. If they’re able to start with, for example, a fund accounting system, and later bring in a portfolio monitoring solution, a deal management solution, and then an LP reporting solution, finding an all-in-one platform they can adopt gradually would make sense. It would allow them to form a relationship with one vendor and move their tech stack along at their own pace, based on their own maturity.
One piece of advice I’d give smaller GPs as they think about building a data strategy is to think about what information you actually need. Many smaller GPs believe they need to analyze every single line item in a P&L to really appreciate the data and understand where it’s coming from. What you see at larger organizations is that they’ve stopped looking at that level of granularity.
There are obviously different priorities there – people have different expectations – but you have to be very realistic about what’s actually going to drive value and what you really need to be aware of. So be realistic about what you need from your data strategy, and about what you actually need to report on. Understand what’s important: is it my revenue, my costs? Where am I really trying to improve?
Learn more about how global consulting firm Lionpoint Group is transforming alternative investments and discover how Allvue’s next-generation investment software solutions can help empower superior investment decisions.