Technical Due Diligence is the label for the evaluation process completed before an investment in, or acquisition of, an organization, and that process will vary greatly by company stage, investment size, and ownership stake.
The recent explosion of healthcare startups, specifically technology-enabled healthcare firms (who isn’t one?), has led to an accompanying increase in diligence efforts, the need for review assistance, consulting firm specialization, and some welcome maturity around the process.
In response, many VC/PE firms have actively differentiated themselves by rounding out their teams with skilled operators and deep technical leaders to assist with diligence (and ongoing ops), while others still prefer to contract with one of the many firms providing this service.
A somewhat provocative take on early technical due diligence for growth-stage companies is that the process was viewed as a tactical step to get out of the way, rather than a key driver of valuations and investment decisions. One could argue that even today, in too many instances the healthcare IT diligence process is driven by compliance activities (HIPAA anyone?) and substantial box-checking, rather than being viewed and managed as a strategic asset in itself (more on that later).
The process, at its simplest, can be boiled down to a series of standard document requests, followed by an on-site visit with team members and a detailed agenda, and ending with deliverables (findings) generation. It’s tempting to standardize/templatize this process, and although the details of each diligence effort will vary, the foundation of a tech review should indeed be straightforward, easily articulated and defensible (see example here).
The purpose of this post is not to itemize those standard tasks, nor to argue points around a diligence agenda, but rather to share five habits that have made a positive, material difference in the quality of the process and outcomes for me. These are details not always found on a diligence checklist, but behaviors I have used and appreciated from both sides of the diligence table, both in performing reviews and in responding to them from within organizations. So let’s get started.
Whether it’s for a growth-stage company or long-established enterprise, the odds are high that the technology team will already be >100% booked with day-to-day operations, product release commitments and all the rest. While the CEO and CFO may have budgeted part of their calendars to the roadshow and diligence process, that’s a rarity for others. The diligence process will, therefore, be over and above everything else on their plates, starting with document aggregation and moving through the site visit, any follow-ups, and beyond, so plan the engagement accordingly.
The first few minutes of a site visit will often set the stage: will it be a collegial and rich discussion of the decisions made by the team (with better quality findings); or devolve into a series of yes/no/limited answers and the feeling that the reviewer is pulling teeth, because every additional item shared by team members is considered a potential risk?
Very often, the learnings found in these informal, unguarded conversations off the standard agenda are the ones of most value to the client and management going forward.
Whether for a growth-stage company or established enterprise, every technology team should be somewhere on a path of sorts, passing through different stages of maturity as the team scales and investment dollars become available. Making the trade-offs on how each hard-fought IT dollar gets spent is one of the tricks of technical leadership.
Diligence should therefore not be viewed as an audit with deficiencies or findings, but rather a snapshot of a maturity path, and the company’s particular position on it. In question form: (a) where is this team on the standard maturity path, (b) do they have the right priorities for the limited budget they are operating under, and (c) can one see them making smart technical and investment decisions in future?
To make this point concrete, compare two ways of reporting the same finding. The audit-only statement, which simply notes a deficiency, has limited value. The maturity-path statement acknowledges the current gap but adds an assessment of the CTO’s ability to juggle limited funds and plan appropriately for the long term. The first presents the CTO as lacking in some respect, while the second (appropriately) hands out praise where it is due.
A formal technology diligence process will usually cover a multitude of discussion points, large and small, within the document review, on-site meetings, and follow-ups. A typical deliverable may be hundreds of bullet points.
I’ve reviewed dozens of these historical artifacts filed away within companies, and in a surprising number of them, it is hard to find any key considerations that were specifically called out to the client to influence their investment decision. In these documents, it’s clear that the tactical consultant template entirely drove the deliverable - for example, all 50+ findings/remarks in the report are given the same priority. This circumstance stems from the unfortunate practice, somewhat common in consulting circles, of valuing deliverables by gross weight rather than their actual utility to the client.
While a few VC/PE clients may still view technical diligence as a checklist item to get through before returning to finance optimization, most investors today are sold on the value of the process. They will fully expect to hear these types of key findings - in fact, will be prepared to change the terms of the deal based on them.
Some examples? Unanticipated key considerations may include factors as varied as a simmering leadership team battle that comes to the surface during the on-site visit; verbal disclosures that are clearly at odds with the documents prepared for the diligence process; or quite often today, core systems and capabilities that stand on a foundation of bubble gum and baling wire, despite what the charismatic CEO stated in the pitch discussion. These key learnings don’t always fit into a standard diligence template, nor should they be forced into one, or cut from the report for convenience.
Health IT organizations of every size, from 5 developers to the enterprise, have a wealth of options available to them that did not exist a few years ago. The advent of the cloud (and BAAs), new SaaS offerings, open source frameworks and APIs of every variety enable lightning-fast implementations of novel ideas.
However, this richness brings new challenges in establishing value as well as technical risk for each organization. The health IT future is not one of large-scale IT projects with a limited number of known vendors, but rather a process of complex orchestration between internally developed IP, established players, emerging entrants and dozens of external services to build everything required to execute and scale.
This trend will mean that for many diligence efforts, there will be less time spent inventorying large systems and vendors, and more focus given to complex architecture and detailed dependency graphs to fully understand how core capabilities come together and are managed. Target companies will vary wildly in their ability to articulate this list of dependencies during the review, which is in itself a critical learning.
Questions will include: Is this particular orchestration of services unique in any way, or could it be replicated easily by others? What are the risks for the individual components? Are we paying for these services and do we have SLAs? What are the maturities of the new entrants leveraged here? Can this architectural pattern scale, and to what volumes? Are there potential issues with the open source licenses being leveraged?
Perhaps most important, and validating #2 above — Does leadership have a realistic view of the myriad risks included within the architecture today, as well as a strategy to replace those components over time as the organization scales?
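One lightweight way to make this dependency review concrete is a simple inventory that records, for each external service or component, its contractual status, license, and vendor maturity, then flags the items worth raising in the report. The sketch below is illustrative only; the schema, field names, and risk rules are hypothetical assumptions, not a prescribed diligence template.

```python
from dataclasses import dataclass

@dataclass
class Dependency:
    """One external service or component in the architecture (hypothetical schema)."""
    name: str
    category: str   # e.g. "SaaS", "open source", "cloud"
    has_sla: bool   # is a contract/SLA in place?
    license: str    # e.g. "commercial", "MIT", "GPL-3.0"
    maturity: int   # 1 (new entrant) .. 5 (established player)

def flag_risks(deps):
    """Return (dependency, reason) pairs worth calling out in the findings."""
    flags = []
    for d in deps:
        if d.category == "SaaS" and not d.has_sla:
            flags.append((d.name, "paid service without an SLA"))
        if d.maturity <= 2:
            flags.append((d.name, "new entrant; maturity risk"))
        if d.license.startswith("GPL"):
            flags.append((d.name, "copyleft license; warrants legal review"))
    return flags

# Hypothetical inventory gathered during the document review and site visit.
inventory = [
    Dependency("claims-api", "SaaS", has_sla=False, license="commercial", maturity=4),
    Dependency("fhir-lib", "open source", has_sla=False, license="GPL-3.0", maturity=2),
]

for name, reason in flag_risks(inventory):
    print(f"{name}: {reason}")
```

Even at this level of simplicity, forcing every dependency through a uniform set of questions (SLA? license? maturity?) surfaces the gaps that target companies often cannot articulate themselves, which, as noted above, is itself a critical learning.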
Finally, I’ve used the phrase “hold an ownership mentality” for many years, and although it generates curious responses on occasion, it’s how I always approach the work, whether it’s a diligence review or developing a second opinion on a product strategy. It comes down to the following: How would I feel about this client deliverable if I were personally accountable for executing on it, with these recommendations, this team, this technology stack, and so on? Would the document need to change? Would I be backpedaling to try to weaken it in any way? Are there key considerations not mentioned here that should be added now?
It’s often assumed that consultants are already operating under this tenet, but in my experience that’s frequently not the case. It is relatively easy to drop off a report and invoice, then move on to another engagement in the comfortable knowledge that someone else will be held accountable for the execution phase; but it would be a different story entirely if the client turned around and held the reviewer responsible for executing on those very same recommendations.
The client deserves to get the most honest take possible on the company under consideration, and putting oneself in the hypothetical role of execution lead for those findings is a good way to ensure that end.
Scott Booher is Principal of HIT Reboot. More information on HIT Reboot diligence services can be found here.