Ashabil Rizhana, Operations Manager
Grahame Grieve, known as the “father of FHIR,” stops by the Digital Health Hackers podcast with an update on global adoption, and we look at some of the ongoing stumbling blocks to universal data sharing. He architected healthcare’s best shot at data interoperability when he created HL7’s Fast Healthcare Interoperability Resources (FHIR), now the leading standard for healthcare data exchange. Grahame is the FHIR Product Director at HL7 and project lead of the FHIR Core Team.
Sidharth Ramesh: Thank you for joining us for this interview today. I have a bunch of questions, but before that: what have you been working on? I'm assuming the new R6 release has kept you busy.
Grahame Grieve: We've been working on R6, but at the moment our life is consumed by just supporting implementation across the whole FHIR ecosystem, which is growing faster than I can keep up with. We've also been focusing a lot on data quality and rounding out the tools that support implementation. I mean, we have a dream for how much interoperability we're going to get, and in the real world we're held back from that by a myriad of factors, but we want to make sure that we're not the ones holding it back. So a lot of the work I'm doing is under the hood: quality, infrastructure. If our ducks are in a row at that level, then other people don't have any excuse.
SR: How did FHIR become what it is today, the most popular healthcare IT standard? Can you think of a moment that led to this?
GG: It's a big, long, continuous process: from the day that I started working on it by myself, to the first few people who got on board, to the first committee that decided to spend time on it. There's a whole parade of milestones, but three really stand out. The first is when the US vendors chose to use FHIR for the API they were required to provide for patient and provider access to data. That's probably the most significant technical milestone.
Another one is when I convinced the HL7 board to adopt a formal open source license for the work, and my life has never really recovered since we made that decision. I mean, some of the old guys on the board were still in the old commercial mode; they were like, "you've wrecked everything." The day they lost that vote on the board, that was the day.
And then another one really stands out. It wasn't very significant technically, but it was massively significant in the news: when Apple decided to adopt FHIR as is, without doing any Apple customizations. That was really big.
SR: It's interesting: you're from Australia and you were working there when all of this happened, but somehow all of these U.S. companies, Epic, Cerner, Apple, decided to go ahead with FHIR way before Australia did. Is that right?
GG: Oh, yeah. Australia is dragging the chain. We finally got into it last year, trying to catch up. But a lot of it is to do with cycles. In fact, Australia is kind of unlucky, because the impetus to do FHIR came out of looking at the Australian national program. Midway through its big refresh I thought: this is a disaster, the standards aren't suitable! So I said, well, there have to be new standards, and I went off and did new standards. But it was too late for Australia; they were stuck for 10 to 15 years with the choices they'd made. That's where Australia has been, maybe a few years longer than they should have been. Whereas the U.S. and other countries landed at just the right time, because they were about to start something new that the existing standards weren't any good for. But it was always an international effort from the very beginning, done through the HL7 community. The fact that I happen to live in Australia is not really significant.
SR: The U.S. adopted the REST paradigm of FHIR primarily, and went the SMART on FHIR route for application interoperability, right? Previously, most HL7 standards were built around one specific paradigm: V2 was about messaging, V3 mostly about documents. So what was the history behind FHIR supporting almost all of these paradigms? Why not just stick to one?
GG: I'd been doing data interop for vendors, customers, and national governments for a decade. The one thing that really stood out to me is that there's a huge price to transforming data from one paradigm to another. It's hard enough to transform syntax, but the paradigms differ by much more than syntax, and each of them has reasons to exist. Having a vertical-stack standard specific to one paradigm was clearly a big tax on the industry as a whole, and was clearly going to hold us back.
In 2010, I realized that we'd lost our way and that the paths we were on were not suitable for what was coming with the web and healthcare. And I just asked: if I swept everything off the table and started again, what would it look like? Then I cherry-picked all the bits that I thought were the best, from V2, from DICOM, from CDA, from what people were doing in Thrust, from openEHR. But certainly, being able to reuse the same content seamlessly across those different interop paradigms was key to making it worth people's while to engage.
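To make that concrete, here is a minimal sketch, ours rather than anything from the interview, of what "the same content across paradigms" looks like in FHIR: one Observation in JSON, used unchanged for a REST write, a message, and a document. The server URL is hypothetical, and real message and document Bundles would also lead with a MessageHeader or Composition entry.

```python
# A sketch of one Observation reused across FHIR's paradigms.
# The endpoint is hypothetical; details are trimmed for brevity.
import json
import urllib.request

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4",
                         "display": "Heart rate"}]},
    "valueQuantity": {"value": 72, "unit": "beats/minute",
                      "system": "http://unitsofmeasure.org", "code": "/min"},
}

# REST paradigm: POST the resource to a FHIR endpoint.
rest_request = urllib.request.Request(
    "https://fhir.example.org/Observation",  # hypothetical server
    data=json.dumps(observation).encode("utf-8"),
    headers={"Content-Type": "application/fhir+json"},
)

# Messaging paradigm: the identical resource inside a message Bundle
# (a real one would lead with a MessageHeader entry).
message = {"resourceType": "Bundle", "type": "message",
           "entry": [{"resource": observation}]}

# Document paradigm: the identical resource inside a document Bundle
# (a real one would lead with a Composition entry).
document = {"resourceType": "Bundle", "type": "document",
            "entry": [{"resource": observation}]}
```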
SR: I've been reading your book, Principles of Health Interoperability, and there's a really interesting diagram that always catches my eye: a picture of multiple nodes all talking to each other directly, versus all the nodes sharing one common standard. You also talk about how cost scales with the number of bespoke interfaces versus a standard interface. So my question is: keeping those principles in mind, what was your reasoning behind FHIR going with the 80-20 rule?
GG: The thing is, that rule is often misconstrued. It's not 80% of the data, or what 80% of the systems do. When you actually get involved with these projects, what you find is that some business analyst gets in cahoots with some doctor who wants something that nobody else does. They decide they're going to do this, and they get themselves into a position of power: they've got their hand on the budget decision to buy. Then they say, well, you have to have that in the standard, and they go into battle to get their stuff into the standard. Everybody does that, and you end up with standards that have hundreds of elements where only one person is ever going to use each of them.
You've got to have a way to hold that back and say: no, the standard is for the common stuff. So the 80-20 rule is not about 80% of the data; it's about focusing on the things people agree about. There are some domains where 80% of what 80% of the systems agree on is 100% of everything, because everybody does it the same way. Diagnostic reporting is like that: there's really not a lot of variation, because we've all been doing it for decades. But there are other things, like care planning, care plans, plan definitions, where people are all over the place and the complexity is mind-blowing. So we said no, the standard is what everybody does; if you want your own stuff, you've got extensions and profiles to add it on, but let's try and keep the core simple. It doesn't entirely work, and extensions aren't everybody's favorite. But that's the key to the 80-20 rule, and it's often misconstrued: it's not 80% of the data, it's keeping esoteric requirements that only one party has out of the core standard.
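For readers who haven't met the mechanism he's describing, here is a minimal, hypothetical sketch of how a local-only requirement rides along without touching the core: the Observation stays standard, and the extra data point travels in an extension whose URL (invented here for illustration) points at a locally published definition.

```python
# Sketch of FHIR's extension mechanism: a standard core resource plus a
# locally defined extra. The extension URL is hypothetical.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4",
                         "display": "Heart rate"}]},
    "valueQuantity": {"value": 72, "unit": "beats/minute",
                      "system": "http://unitsofmeasure.org", "code": "/min"},
    # The esoteric, local-only requirement lives here, not in the core
    # standard; systems that don't care about it can simply ignore it.
    "extension": [{
        "url": "https://example.org/fhir/StructureDefinition/measurement-posture",
        "valueCode": "sitting",
    }],
}
```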
SR: I agree that FHIR is easy to get started with because of this one principle, that you decided not to put all of the details in there. But take something like Observation: you see different countries doing the same observation in slightly different ways, and each profile has its own definition of how blood pressure should be modeled. Does that concern you?
GG: Yes, it does, and I think we have not been as effective as we wanted at harmonizing that work. Take something really basic, like vital signs or blood pressure. One thing that has become really obvious is that these are starting to flow between patients and specialist healthcare, and one characteristic of consumer software is that it isn't jurisdictionally bound, and neither are patients. So we said there are going to be some consistent informatics elements in all vital signs observations: a LOINC code, a single unit, a single category, so that you can find them. We were aiming really low there, and we've still had a feral battle over even those things. We still have people today who ignore that rule and go, we know better. It sounds really dumb when you say everybody's doing their own thing, but stopping them from doing their own thing is really hard, with life cycles, politics, and business requirements. So I don't know whether we could have done better.
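As a concrete illustration (ours, not Grahame's), this is roughly what those mandated consistencies look like on a blood pressure reading under FHIR's vital signs rules: a fixed LOINC panel code, the vital-signs category, and UCUM units. The real profile requires more (a subject, an effective time), which is trimmed here.

```python
# Sketch of a blood pressure Observation with the consistent elements
# the vital signs profile mandates; trimmed for brevity.
blood_pressure = {
    "resourceType": "Observation",
    "status": "final",
    # The single, findable category every vital sign must carry.
    "category": [{"coding": [{
        "system": "http://terminology.hl7.org/CodeSystem/observation-category",
        "code": "vital-signs"}]}],
    # The fixed LOINC code for a blood pressure panel.
    "code": {"coding": [{"system": "http://loinc.org", "code": "85354-9",
                         "display": "Blood pressure panel"}]},
    "component": [
        {"code": {"coding": [{"system": "http://loinc.org",
                              "code": "8480-6"}]},  # systolic
         "valueQuantity": {"value": 120, "unit": "mmHg",
                           "system": "http://unitsofmeasure.org",
                           "code": "mm[Hg]"}},      # the single UCUM unit
        {"code": {"coding": [{"system": "http://loinc.org",
                              "code": "8462-4"}]},  # diastolic
         "valueQuantity": {"value": 80, "unit": "mmHg",
                           "system": "http://unitsofmeasure.org",
                           "code": "mm[Hg]"}},
    ],
}
```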
I just did a big survey of the community. We had hundreds of responses, and 10% of users told us in the survey that they don't validate their resources. So we've moved the community to a place where only 10% don't validate and the rest do, and we provide them with open source, free, high-quality validation. That is an incredible driver for consistency, and in some ways what happens is you get more consistency at the lower levels.
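For context, validation can run locally through HL7's published validator, or against a server via FHIR's standard $validate operation. Here is a minimal sketch of the latter; the endpoint is hypothetical and error handling is omitted.

```python
# Sketch of server-side validation via the standard $validate operation.
# The base URL is hypothetical.
import json
import urllib.request

def validate(resource: dict, base: str = "https://fhir.example.org") -> dict:
    """POST a resource to [base]/[type]/$validate and return the
    OperationOutcome the server sends back."""
    req = urllib.request.Request(
        f"{base}/{resource['resourceType']}/$validate",
        data=json.dumps(resource).encode("utf-8"),
        headers={"Content-Type": "application/fhir+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

outcome = validate({"resourceType": "Observation", "status": "final",
                    "code": {"text": "example"}})
for issue in outcome.get("issue", []):
    print(issue.get("severity"), issue.get("diagnostics"))
```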
SR: The fact that people are taking validation seriously, and that the tooling is mature enough to support it, is definitely a big feat. But there are still some holes, right? Not everything in a FHIR profile can be validated by a computer, the semantic meaning of what people are putting in, and so on.
GG: Sure, there are certainly things we can't do. And every time I realize there's a way I can validate something that I haven't been validating, and I add it to the validator, I get squawks from people going, but it was valid before. And I say, no, it was never valid, you just weren't checking it before. So yeah, totally, there's a hole: the trap of thinking, the validator says it's valid, so I don't need to think about it anymore. And of course, as you say, the validator will never get to the point of being human, but maybe we'll start throwing AI in there. Maybe large language model FHIR validators are next.
SR: My next question: there's a lot of FHIR written into law. For example, the US Core profiles in the U.S., and a lot of countries have the FHIR R4 version written into their legislation. So what do you think the resistance to moving to the next version of FHIR, R5 or R6, would be, given that certain things are already in legislation?
GG: To my knowledge, most countries have not written FHIR into their law. Instead, it's in regulations, which can be updated. We have active discussions with a lot of those government departments; many of them have a designated individual or team who interacts with the FHIR community and represents their interests. All of them know that at some point we'll be putting out R6, and they've collectively decided, in discussion with us, to let R5 go by and not engage with it, apart from a few particular projects that have particular drivers for R5. So we've decided, corporately at HL7, to back off and let R6 percolate a little bit longer and get it much more correct, because the market is getting a lot less tolerant of an ongoing stream of development, and we really need to finalize as much as we can. Then we'll be working with our partners, government partners and big vendors, to think about what a migration strategy looks like and get everyone to migrate. But it'll take years.
SR: I have a parallel to draw here with how the openEHR community does versioning. There's content-level versioning of the actual data models, and then API-level versioning of how you interact with those models. In FHIR, it seems that both are always released together, the data models as well as the API. Do you see that as a problem? Are there any plans to somehow release data models without changing the API?
GG: Well, the API is pretty settled now at the REST level, but I think it's misleading to think of the API as separate from the content that goes across it, because the two drag on each other, and the content is still changing as people feel their way around the domain. Designing good resource content is a compromise between simplicity and complexity while trying to capture the right thing. So the content has been much slower to settle than the API, which has been relatively stable since 2015 and is frozen in stone now. But that doesn't mean everything is effectively settled, because people are still changing things like whether a resource has a status element, and whether the status includes an unknown value or not. If that changes, it really changes the way the API works. I've never believed that fixing the API without fixing the content is a solid goal. Generally, the more stability you have, the better, but, you know, they affect each other.
SR: A lot of people are using FHIR natively for storage. I know you're against this, and I've seen a lot of your other talks where you've explicitly stated that FHIR for storage may not be such a good idea. But we're seeing this pattern where people separate the apps from the data: they have a data platform, and they build their apps on top of it. What's your opinion on this kind of usage of FHIR?
GG: It's not that I think it's a bad idea. We designed FHIR for interop, so it's denormalized, with boundaries chosen through considerable community discussion to represent resilient, robust units of exchange. And those units of exchange are not the units you would choose as storage units inside your system; storage simply isn't a consideration in their design. We produce something that's optimized for exchange. So should you use that as your internal storage format?
The first question is: how much is your system going to be exchanging? The more your system exchanges, the more that compromise is worthwhile. If it turns out that your system is designed entirely in terms of the FHIR API, then interoperability is an inherent property of your system rather than something you have to do something about; it's there for free. For some business applications, that alone is compelling enough to make up for everything else. I think in the end it will become compelling generally, because people will ask: why have you got an application that isn't inherently interoperable? Why is that something we have to pay more to add on later? That's just dumb now.
As an architect, there's a strong inclination to use a relational database that precisely captures your requirements. However, that approach has its drawbacks, including lengthy reviews, intricate designs, and slow development cycles; adding a single data element might take as long as two years in a typical enterprise system. If a client urgently needs a new data element by next week, in a traditional setup you'd have to explain that the earliest delivery would be in two years. With a RESTful paradigm like FHIR, by contrast, you can simply start using a new data element almost immediately, a much more responsive answer to client needs. That's why FHIR is often seen as the better choice when rapid implementation is critical.
Nevertheless, most real-world applications fall somewhere between these extremes, which is why I often recommend a hybrid approach: store everything in a FHIR data lake for inherent interoperability, dumping all data into FHIR resources first, but handle tasks like generating a comprehensive work list, which might mean querying tens of thousands of resources repeatedly and quickly, in a traditional relational database. A hybrid system like that supports rapid change where you need flexibility, while still leveraging FHIR for straightforward, interoperable storage. It's not the only solution, but it's a practical compromise for the inherent challenges of enterprise information systems.
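A toy sketch of that hybrid shape, our illustration rather than anything from the interview: resources are kept whole as FHIR JSON, while a narrow relational projection serves the hot work-list query. All table and column names are invented.

```python
# Toy hybrid store: full FHIR JSON plus a relational work-list projection.
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE resources (id TEXT PRIMARY KEY, body TEXT)")
db.execute("CREATE TABLE worklist (id TEXT PRIMARY KEY, patient TEXT,"
           " code TEXT, status TEXT)")

def store(obs: dict) -> None:
    """Write the whole resource, then refresh the narrow projection."""
    db.execute("INSERT OR REPLACE INTO resources VALUES (?, ?)",
               (obs["id"], json.dumps(obs)))
    db.execute("INSERT OR REPLACE INTO worklist VALUES (?, ?, ?, ?)",
               (obs["id"],
                obs.get("subject", {}).get("reference", ""),
                obs["code"]["coding"][0]["code"],
                obs["status"]))

store({"resourceType": "Observation", "id": "obs1", "status": "preliminary",
       "subject": {"reference": "Patient/p1"},
       "code": {"coding": [{"system": "http://loinc.org", "code": "85354-9"}]}})

# The work list hits the narrow table, not thousands of JSON blobs.
print(db.execute("SELECT id, patient FROM worklist"
                 " WHERE status = 'preliminary'").fetchall())
```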
SR: We're seeing a lot more cloud providers offer this: Google, AWS with HealthLake, all of them provide a FHIR-native store. What would your answer be if, at some point, these cloud vendors optimize the hell out of these FHIR stores so that they're as fast as traditional database queries?
GG: They have optimized the hell out of them, and they're stunningly fast for what they do. I was watching a Google guy write an SQL query against a FHIR data lake full of several billion FHIR JSON resources; he was ad-hoc-ing SQL with FHIRPath in it, and it was taking 0.01 seconds to execute. It was mind-blowing when that started happening, and it certainly changed my view of what optimization makes possible. But still, at the end of the day, a crafted, optimized relational database, also using those same efficiency tools, is always going to be faster than a RESTful API using them.
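To give a flavor of the pattern he's describing, here's a toy-scale analogue: SQL reaching directly into stored FHIR JSON. The real systems run over billions of rows in cloud warehouses with FHIRPath expressions; this sketch uses SQLite's JSON functions and an invented one-column schema.

```python
# Toy-scale SQL over FHIR JSON (requires SQLite built with JSON support,
# standard in recent Python builds). Schema is illustrative.
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE observation (body TEXT)")
db.execute("INSERT INTO observation VALUES (?)", (json.dumps({
    "resourceType": "Observation", "status": "final",
    "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4"}]},
    "valueQuantity": {"value": 72, "code": "/min"}}),))

# Ad-hoc SQL reaching straight into the stored resources.
rows = db.execute("""
    SELECT json_extract(body, '$.code.coding[0].code') AS code,
           json_extract(body, '$.valueQuantity.value') AS value
    FROM observation
    WHERE json_extract(body, '$.status') = 'final'
""").fetchall()
print(rows)  # [('8867-4', 72)]
```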
There are people running national systems with a pure FHIR backend on a HAPI server on a single box; they're running national clinical information systems like that in Africa. It's astonishing. No one would even try that in Europe or America, but they're just doing it: single server, national system. It's amazing, and it works. So maybe the optimizations are enough. At some point the difference between 200 milliseconds and 10 milliseconds doesn't matter; although it's technically 10 to 20 times faster, it's not noticeable.
SR: What does the future hold for you? What are you doing next?
GG: I said I would do this until my family's healthcare got better. All of us have experienced the discontinuities and the broken meta-systems in healthcare. All of us have worked with clinical providers, GPs, specialists, and surgeons, who do their best to make sure their bit is world-class. But those bits are not the problem; the problems are between the world-class bits, and that's the same in every country I've been in. It's a consistent experience. So it has been my plan to gradually, and it's not working very well, but to gradually move away from focusing on the technical infrastructure, making sure those bits are in place, and to leverage the bit of a reputation I have into putting more pressure on the system as a whole to start addressing these issues.
I'll give you an example. I have a friend who found a lump in her breast. She went to a GP, then a specialist, then radiology for some imaging, back to the specialist, back to imaging, back to the specialist. They decided to do radiotherapy and then chemotherapy, so that got scheduled. By the time radiotherapy started, seven months had passed, and none of those steps needed those gaps. The gaps were built into the system, not out of clinical necessity, but out of a lack of ability to coordinate. So for me, I'm focused on finding opportunities around the world to start building coordinated care based on the kind of ecosystem we've tried to enable, and on pushing people to say: let's actually give people real care. That story definitely struck a chord with me, the fact that you kill people by not doing this properly.