Hi there – I can't make the call today, so I'm sharing this summary of months of dialogue and discovery.

Assuming that "Do no harm" should be a first principle to guide this work, there are important ethical consideration that are NOT fully addressed by topics of security, regulatory compliance, or even traditional notions of "privacy" and consent. This is especially true given the prospect of integrating complex systems in ways that can collapse contexts and make possible new, unanticipated behaviors – as well as the serious, well-documented problems of algorithmic decision-making and predictive analytics. If we take "do no harm" seriously in the process of system integration, designer intent is not enough to address ethical concerns, and neither are data use agreements. (See here for an example of one of many discussions about these issues.) The bottom line is that this work carries a very high risk of causing unanticipated harms, even unintentional harms, even harms not perpetrated by bad actors, even harms from actions that users might technically consent to at one point in time. These risks need to be considered in concrete terms at every phase of the design process.

I think this realization has implications ranging from the constitutional (this work should be framed by a statement of values, and conducted in ways that "check" the work to keep it aligned with those values) to the functional (e.g. a 'proof of concept' should be expected to prove that it is capable of basic functions that enable risk mitigation, monitoring, harm reduction, etc.).

It's been suggested that the user stories that guide Project Unify should be elaborated to account for specific kinds of harm scenarios. I think that's a good idea. It should probably happen through some kind of 'ethical audit' conducted by specialists who can speak more specifically than I can to the particular kinds of harm at risk in this work.

In the meantime, I've conducted a preliminary review of the literature in the field and engaged in some initial dialogue with ethics experts, so I can now offer some examples of the kind of design considerations that may well need to be included in a 'Proof of Concept' to ensure that the would-be concept is ethical by design (a rough sketch of what one such mechanism might look like follows the list):

  • A user should not only be expected to consent to sharing their own information at one point in time, but should be able to review the record of such data as it is exchanged through various phases, and should be able to revoke consent after the fact.
  • Users should be able to review clear documentation of the way their data has been coded and integrated with other data.
  • Users should be able to object to classifications, logged records, and methods of use.
  • Users should be able to request changes to inaccurate data, and to request deletion of such data.
  • Users should be alerted to the possibility of false positives and false negatives in identity matching, and should be able to review and contest such matches.
  • Institutional users should be able to monitor and comprehend the ways in which aggregate data is used in algorithmic decision-making processes and predictive analytics.
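To make the first of these considerations concrete, here is a minimal sketch in Python of a consent record that logs each data exchange for later review and supports revocation after the fact. Everything in it is hypothetical – the names (ConsentRecord, ExchangeEvent, log_exchange, etc.) are mine, not anything that exists in Project Unify – and it is meant only to illustrate the kind of basic function a proof of concept could be expected to demonstrate, not a proposed implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

# Hypothetical, illustrative names throughout.

@dataclass
class ExchangeEvent:
    """One logged instance of a user's data moving between systems."""
    timestamp: datetime
    source_system: str
    destination_system: str
    purpose: str

@dataclass
class ConsentRecord:
    """A user's consent, with a reviewable exchange log and revocation."""
    user_id: str
    granted_at: datetime
    revoked_at: Optional[datetime] = None
    exchanges: List[ExchangeEvent] = field(default_factory=list)

    def is_active(self) -> bool:
        return self.revoked_at is None

    def log_exchange(self, source: str, destination: str, purpose: str) -> None:
        # Refuse to record (i.e., permit) an exchange once consent is revoked.
        if not self.is_active():
            raise PermissionError("Consent revoked; data may not be exchanged.")
        self.exchanges.append(ExchangeEvent(
            timestamp=datetime.now(timezone.utc),
            source_system=source,
            destination_system=destination,
            purpose=purpose,
        ))

    def revoke(self) -> None:
        # Revocation after the fact: no further exchanges are permitted,
        # but the exchange history is retained so the user can review it.
        self.revoked_at = datetime.now(timezone.utc)
```

Note that in this sketch revocation does not erase the log – the exchange history remains reviewable by the user, which also speaks to the documentation and objection considerations above.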
I assume these could be made more specific and/or tweaked, expanded, etc.; I just wanted to offer a set of examples of what might be expected to come out of an ethical design process that aims to enable data to flow responsibly among systems operated by different institutions in different contexts. I have attempted to model these particular design objectives on the principles that structure existing data protection frameworks such as the GDPR, though there may be more appropriate points of reference elsewhere and within our field. I'd also suggest looking to groups like Our Data Bodies, the Responsible Data Forum, and others that specialize in tech ethics and equitable design. I'd be glad to make introductions and/or facilitate such conversations.
 
Eager to see how these important considerations can best be applied to this work.
 
Sincerely,
Greg
bloom@gregbloom.org


Replies

  • I welcome and encourage your do-no-harm focus – this should be the work of everyone in the POC (and beyond). It is closely embedded with racial equity work, which is also all of our responsibilities. That being said, someone has to hold up the flame to illuminate the paths forward. We owe you dearly for being that persistent voice, so let's continue to have this discussion ... and encourage other voices. The POC is part of our National Action Agenda, which we launched last Friday, 8/14. This initiative is a good place to engage people who are interested in and committing time and attention to data sharing across SDH factors, sectors, and domains. Let's discuss.

    As for the GDPR – let's learn from Europe (and CA). I've invited Alfonso Mantero, who leads the European Social Network, to participate in the Action Agenda and symposium, so that is a good place to learn and share, I suspect.
