Hi folks - I've really appreciated listening in on the calls over the past few weeks. Thank you all for your important work.
I have concerns about how to address the potential harms that can come from this kind of interoperability.
Consider this article describing a (long overdue) concept called 'abusability.' It's primarily about Silicon Valley's platforms, but it applies to healthcare and government as well. And while the article frames this in terms of security issues, it's not just about cybersecurity: it's about anticipating potential uses that might cause unintended consequences.
It would seem that protocols for data sharing between health institutions, human services, and government agencies have a significant potential for abusability.
How do we ensure that people's personal information doesn't, say, get into the hands of overreaching (and potentially unlawful) authorities who might break up families because of a data point?
Given that we know there are systemic patterns of inequity in both the health system and key government systems (say, criminal justice, immigration, etc.), how can we ensure that establishing more interoperability among these systems doesn't also unintentionally increase inequity? (See Virginia Eubanks' terrific Automating Inequality for more examples of these patterns.)
It seems like these questions sprawl far beyond matters of cybersecurity, and they might not be entirely encompassed by consent and permissioning systems. These seem like the kinds of use cases that should be at the center of these conversations: if we're going to promote interoperability, how are we doing so in a way that deliberately reduces the potential harms thereof?
~greg
Replies
I just came across these resources via the Responsible Data Forum:
https://www.tech-transformed.com/product-development/
A 'consequence scanning' kit: https://doteveryone.org.uk/project/consequence-scanning/
This is designed for 'product management,' but I assume it could be adapted to address interoperability capacities.
Put simply, designing against abusability and potential harm is about balancing the frame, so that in addition to "user stories" about the good outcomes we want to enable, we're also articulating potential unanticipated outcomes that we want to mitigate.
This publication from the Haas Institute isn't directly about the dangers that can come from data sharing; it's about "racially exclusionary policies and practices" of disenfranchisement and displacement in the Bay Area. Yet it is connected in the greater scheme of things.
"The rampant displacement seen today in the San Francisco Bay Area is built upon a history of exclusion and dispossession, centered on race, and driven by the logic of capitalism. This history established massive inequities in who owned land, who had access to financing, and who held political power, all of which determined—and still remain at the root of deciding—who can call the Bay Area home. While systems of exclusion have evolved between eras, research indicates that “it was in the early part of the twentieth century that the foundation for continuing inequality in the twenty-first century was laid. By building inequality into the physical landscape, cities added ‘unprecedented durability and rigidity to previously fragile and fluid [social] arrangements’.”
"Racial residential segregation in the Bay Area is not natural or simply a matter of individual preferences and actions. Today’s patterns are partially the result of a wide range of coordinated tactics used to perpetuate racial exclusion prior to the enactment of state and federal fair housing legislation. These exclusionary tactics can be distilled into the following types: state violence and dispossession, extrajudicial violence, exclusionary zoning, racially restrictive covenants and homeowner association bylaws, racialized public housing, urban renewal, racial steering and blockbusting, and municipal fragmentation and white flight."
https://haasinstitute.berkeley.edu/rootsraceplace/introduction
Navah Stein just posted this resource about anti-racism, which gets at a primary motivation for my concern here: https://hub.nic-us.org/forum/how-to-be-an-anti-racist-ibram-x-kendi
Thanks Greg! I just responded on that thread. Sorry for the redundancy!
Greg - I think this is an important topic that I'd like us to get into. It might be something that we can raise on this coming Friday's call (9/20) and then schedule a more in-depth call. Let me know if you think you can attend Friday and would be willing to introduce the topic (or do so on another call).
Daniel
Hi Daniel - thanks for the opportunity to talk about this on last week's call. I won't be able to make this week's, so I wanted to follow up on it here.
I suspect that some of the issues here may, in large part, fall outside the realm of technical standards. I think these are largely questions of governance – and not just about bilateral use agreements between an institution and a vendor. For example, we've learned in various ways that, when it comes to data use, serious harm can come from unanticipated uses that might not be addressable in advance by a traditional contract. We also know that some of the primary use cases discussed here involve "crisis scenarios" that might, one way or another, override previous agreements; and we know that some of the institutional actors involved have at times overstepped their authority, or operated aggressively in situations where federal and local policies conflict. We even know that "de-anonymization" is increasingly possible, especially in scenarios where data can suddenly be joined with more kinds of data.
It's worth repeating that I'm not arguing against interoperability per se here, but rather that risk mitigation and accountability seem like they should be design principles at least as important as "innovation" and efficiency – if not more so. And I wonder how these technical conversations can also be bound up with governance conversations about responsible use and recourse. These are questions that communities will need to grapple with, and to the extent that this work makes those questions more urgent, it should also focus on helping communities design institutional means to answer them.
All that said, I've spoken with a few people who do specialize in responsible data use, and have at least a few examples of design concerns that may have technical implications.
Such as (see the rough sketch after this list for what a couple of these might look like in practice):
- Mitigation against false positives
- Encoding data to be destroyed by default after use
- Mechanisms to trigger renewal of consent
- Monitoring of data access and use, with alerts to notify relevant parties
- Mechanisms to enforce sanctions on actors that have violated use agreements in the past
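To make this a bit more concrete, here's a minimal, purely hypothetical sketch of what a couple of these mechanisms (destroy-by-default, consent renewal, and access monitoring with alerts) might look like in code. The record structure, field names, and expiry windows are all assumptions I'm making up for illustration – not anything drawn from an existing standard or from this group's work.

```python
# Hypothetical sketch only: field names, expiry windows, and alert behavior
# are illustrative assumptions, not an agreed standard.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional


def alert(message: str) -> None:
    """Stand-in for whatever notification channel a real system would use."""
    print(f"[ALERT] {message}")


@dataclass
class SharedRecord:
    """A piece of shared data wrapped in use-limiting metadata."""
    payload: dict                # the actual data being shared
    purpose: str                 # the purpose declared when consent was given
    consent_expires: datetime    # consent must be renewed after this point
    destroy_after: datetime      # data becomes unreadable after this point
    access_log: list = field(default_factory=list)

    def read(self, accessor: str, purpose: str) -> Optional[dict]:
        """Return the payload only if consent is current, the data hasn't
        expired, and the stated purpose matches; log every attempt."""
        now = datetime.now()
        self.access_log.append((now, accessor, purpose))  # monitoring

        if now > self.destroy_after:
            # destroyed-by-default: expired data is never returned
            alert(f"{accessor} attempted to read expired data")
            return None
        if now > self.consent_expires:
            # consent renewal trigger: block access until consent is refreshed
            alert(f"consent renewal required before {accessor} can read")
            return None
        if purpose != self.purpose:
            # an undeclared purpose is exactly the kind of unanticipated use
            # we'd want surfaced rather than silently allowed
            alert(f"{accessor} requested data for undeclared purpose: {purpose}")
            return None
        return self.payload


if __name__ == "__main__":
    record = SharedRecord(
        payload={"client_id": "demo-123", "service": "housing referral"},
        purpose="care coordination",
        consent_expires=datetime.now() + timedelta(days=90),
        destroy_after=datetime.now() + timedelta(days=180),
    )
    # allowed: purpose matches and consent is current
    print(record.read("community_clinic", "care coordination"))
    # blocked and alerted: purpose doesn't match the original consent
    print(record.read("other_agency", "eligibility investigation"))
```

The particulars don't matter; the point is that constraints like these can live in the data structures and interfaces themselves, rather than being left entirely to downstream policy.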
There could be others! The fact that nobody on the call last week seemed to have much to say about these issues could mean that I'm misunderstanding the nature of the conversation... or it could mean that there are important perspectives that are missing from this conversation.
And when it comes to sensitive data about vulnerable people crossing institutional boundaries – from healthcare to, say, the criminal justice system – I think that's a matter of some concern.
~greg
OK, yeah, I think I can make Friday's call this week. thx!
Greg - thanks for raising this topic. It has been percolating over the last several months, and especially recently. I think it is a critical issue and one that we should talk about - there certainly are concerns, and pros/cons to interoperability and data sharing. Is the potential benefit worth the risks, especially at this particular moment in history? I don't know - but I do think it is worth a vigorous exploration and discussion. I suggest we put it on the agenda and schedule it for one of the upcoming weekly calls.
Daniel
Thanks for your response.
To be clear, my question isn't intended to go so far as to suggest "maybe we shouldn't do interoperability at all." Rather, I think that proponents of interoperability should take the potential harms as seriously as the potential benefits; one way to do this might be to establish risk mitigation as a 'first principle' of design. That suggests that some of the use cases considered in an initiative like this should be 'anti-patterns' – undesirable scenarios that designers are challenged to account for. And it suggests that we should look beyond 'cybersecurity' and 'permissioning systems' for answers: what about monitoring, intervention, and recourse?
I would be glad to ask around the data ethics community for help with articulating some of these scenarios.
(Re-reading this now, I can already anticipate that this is a challenge, because it seems like the conversation would quickly veer out of the scope of, say, field-mapping. But if it's not something we can fully address here... then where?)