Markle Director Says PHR Fragmentation Is a Problem


Author: Cindy Atoji

Personal health records (PHRs) could be the cornerstone of a national strategy for health information technology, but PHRs continue to evolve, both in concept and practice. For the past three years, David Lansky, senior director at the Markle Foundation and director of the Markle Personal Technology Initiative, has been leading a public-private collaborative of healthcare stakeholders. Their charge is to help define the role of PHRs and to make recommendations for privacy and security principles, interoperability guidelines, and other frameworks. Digital HealthCare & Productivity recently spoke with Lansky about his work.

DHP: What’s your biggest worry so far, and what needs to be done to permit this flow of personal information?
Lansky: Our current concern is that the emergence of the PHR has so far mimicked the fragmentation we see in the medical system as a whole. Providers are offering their own portals; commercial entities are offering PHRs that are unconnected to doctors and plans; pharmacies are offering their own versions of the PHR; and Medicare has its own portal.

So we’re seeing a set of digital silos that map onto the institutional silos we are all painfully familiar with in healthcare. If a patient wanted to access, aggregate, and apply some intelligence to all of their health information, the data would [currently] be dissimilar and chaotic, even if they could download it into a common platform or product environment.

We appreciate all these electronic tools, but they don’t add up to an empowered consumer until the network environment permits information to flow into a tool that the consumer controls and manages.

DHP: What about privacy and security issues with PHRs? Addressing these concerns seems vital for any system to succeed.
Lansky: Consumers need to feel confident that their information will be handled properly because there is a set of rules in place. They need to know that the network authenticates users in a consistent fashion so that it minimizes the chance of identity theft. There can’t be a hole in the network, or someone on the network who has no disciplinary policies for violations of data privacy.

Patients need to know that the network as a whole has an audit trail. So there is a set of nine or ten rules that we think every good network citizen needs to follow to create trust. We have a workgroup with 40-50 people who are developing a draft set of preferred practices. This fall, we’ll be publishing at least two documents that represent the work of this group and recommending a set of network preferred practices.
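The audit trail Lansky describes can be made tamper-evident as well as complete. A minimal sketch, assuming a simple hash-chained log (the field names and events are illustrative, not from any Markle specification): each record carries the hash of the previous one, so a retroactive edit anywhere in the chain breaks every later link.

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    """Append an event, chaining the record to the hash of the previous one
    so that any retroactive edit invalidates all later records."""
    prev = log[-1]["hash"] if log else "0" * 64
    record = {"event": event, "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append(record)

def verify_log(log: list) -> bool:
    """Walk the chain, recomputing each hash and checking the links."""
    prev = "0" * 64
    for record in log:
        body = {"event": record["event"], "prev": record["prev"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev"] != prev or digest != record["hash"]:
            return False
        prev = record["hash"]
    return True

log = []
append_entry(log, {"user": "dr_smith", "action": "viewed record 42"})
append_entry(log, {"user": "dr_jones", "action": "updated record 42"})
print(verify_log(log))                   # True: intact chain verifies
log[0]["event"]["action"] = "nothing"    # tamper with history...
print(verify_log(log))                   # False: the chain is broken
```

This is only one way to satisfy an audit requirement; real deployments would also need secure storage and access controls around the log itself.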

We have a great ethical and practical responsibility to build a robust system that doesn’t jeopardize the confidentiality of people’s health information. The way to do this is not by writing more laws after the fact, saying, ‘Well, we built this great IT system, but sometimes bad things happen, so if anything bad happens, we’ll punish you,’ but instead weighing the privacy objectives and realizing these are partly dependent on the technology you select and build. The technology and policy really go hand-in-hand. We need to choose and build information systems that protect privacy and deliver information where it’s needed.

DHP: How can content in PHRs be audited and assured to be accurate? Some worry that the opportunity for error is on patient-input data.
Lansky: There’s a school of thought that says that PHRs are meant for the consumer to use -- it’s their tool and they can do what they want with it.

Another school of thought says that PHRs are a platform for interoperability, and from the PHR, patients will transmit data back to health professionals. In this scenario, data has to be date-and-source stamped or digitally signed in some way so it can’t be altered and so the transmission is authentic. There is work going on around stamping data elements to guarantee they haven’t been altered. I think this is a legitimate question, but it is so far off developmentally from where we are in this country that I think it’s kind of a red herring at this point.

I think it’s worth trying to develop date-and-source stamping of data elements so we can verify or validate where items came from and whether they’ve been altered, but I don’t think it’s a truly significant issue at this stage.
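The date-and-source stamping Lansky mentions can be sketched with standard cryptographic primitives. A minimal illustration, assuming a shared HMAC key between sender and receiver (the field names, source name, and key are hypothetical, not part of any PHR standard): the stamp records who supplied the element and when, and the HMAC makes any later alteration detectable.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

def stamp_element(element: dict, source: str, key: bytes) -> dict:
    """Attach a date/source stamp and an HMAC tag so tampering is detectable."""
    stamped = dict(element)
    stamped["source"] = source
    stamped["stamped_at"] = datetime.now(timezone.utc).isoformat()
    payload = json.dumps(stamped, sort_keys=True).encode()
    stamped["hmac"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return stamped

def verify_element(stamped: dict, key: bytes) -> bool:
    """Recompute the HMAC over everything except the tag itself."""
    body = {k: v for k, v in stamped.items() if k != "hmac"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, stamped["hmac"])

key = b"shared-secret-demo-key"
element = stamp_element({"code": "glucose", "value": 95, "units": "mg/dL"},
                        source="example-pharmacy", key=key)
print(verify_element(element, key))   # True: untouched element verifies
element["value"] = 200                # alter the data...
print(verify_element(element, key))   # False: alteration is detected
```

A production system would more likely use public-key digital signatures, so a receiver could verify provenance without sharing a secret; the HMAC version is shown only because it is the shortest way to demonstrate the tamper-evidence idea.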

On the interpretation side of it, patients may find themselves recording a lot more data than any health professional wants to see. There might be volumes of digital data being captured in the home, but health professionals will only want to see a compressed snapshot of this. The issue will be less the authoritativeness of the raw data than the ability to translate it into something useful in terms of a clinical decision-making encounter.
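The compression Lansky describes, from volumes of raw home data down to the snapshot a clinician actually wants, can be as simple as summary statistics. A minimal sketch with made-up glucose readings (the summary fields are illustrative assumptions, not a clinical standard):

```python
def snapshot(readings: list) -> dict:
    """Reduce a stream of home readings to a compact summary for review."""
    return {
        "n": len(readings),                              # how many readings
        "min": min(readings),
        "max": max(readings),
        "mean": round(sum(readings) / len(readings), 1),
        "latest": readings[-1],                          # most recent value
    }

glucose = [101, 96, 110, 128, 99, 105, 117]  # a week of readings (mg/dL)
print(snapshot(glucose))
# {'n': 7, 'min': 96, 'max': 128, 'mean': 108.0, 'latest': 117}
```

Real clinical summarization would layer on trend detection and out-of-range flags, but even this level of reduction turns hundreds of data points into something reviewable in an encounter.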

DHP: Markle’s Connecting for Health initiative has done a lot of work, defining a common framework, establishing a federated, decentralized model based on a “network of networks” and testing two prototypes. What’s next?
Lansky: A year ago, we published a common framework, articulating a common set of technical standards and policies. So I think that basic architecture is well understood. But we have felt there has not been enough done to publicly define the rules of the network and the policies that would govern good behavior on the network. We continue to say that that’s the role the national government needs to step into, because no one else really has the authority to create an enforceable policy regime except the government.

This is not a case of “if you build it, they will come.” People are building it, and they don’t come very fast. From a purely business point of view, our experience is there’s not much reason to share data outside of the enterprise, and certainly not much reason to go to a lot of expense to create an interoperable, information-sharing environment.

People still see a lot of health information as a proprietary advantage. Until there is an economic reason for a health enterprise to share information with a competitor, they are not enthusiastic about getting involved with federations or building capabilities to do that. The biggest barrier is money.

DHP: What’s the solution?
Lansky: There are some worthy ideas around pay-for-performance. But I think where they are going to fall short is that it is not about the boxes -- it’s not about getting more technology into doctors’ or hospital offices. It’s really about the way we deliver healthcare and access information. Payment reform is needed, which is an extremely complex, difficult, and almost untouchable issue.

DHP: On the technical side, what work are you doing on technical standards for PHR interoperability?
Lansky: For the most part, we’ve felt that the standards development work as it applies to electronic health records (EHRs) covers 70-80 percent of the ground that’s relevant for PHRs, at least for now. The medical data and codes in the EHR are also relevant for the PHR, and we applaud the people working in the EHR world for developing those standards. For the most part, we think they will be very serviceable in a PHR world as well. But there is a big standards gap remaining as far as using a PHR or EHR to document a patient’s report of their own health.
