right to be forgotten and non-repudiation

The Indieweb privacy challenge (Webmentions, silo backfeeds, and the GDPR) by Sebastian Greger (sebastiangreger.net)

Using a social media silo backfeed in the way it is commonly implemented today may not be entirely impossible from the legal perspective, as presented in the “Rechtsbelehrung” podcast (building the argumentation on Twitter users having consented to the service’s terms on third-party data use during sign-up, informing comprehensively about it in the privacy statement, and ensuring that the implementation is 100% compliant with all applicable API, developer and service terms). Yet, as also becomes clear from the podcast, this argumentation comes with heaps of potential points of failure that could later lead to it being declared unlawful in a legal dispute (did the user really agree to this specific use in the Twitter T&Cs? were the Twitter terms really understandable enough for the user? does the backfeed solution truly adhere to every single API rule in place, e.g. almost instant deletion of mentions based on deleted Tweets?).

I have information-security concerns about the "right to be forgotten". In information security, non-repudiation is the assurance that someone cannot deny something. Systems built for non-repudiation can guarantee that a party to a communication cannot later deny having sent a message that they originated.
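Non-repudiation is classically provided by digital signatures: only the holder of a private key can produce a valid signature over a message, so the signer cannot later deny it. Here is a toy sketch using textbook RSA with deliberately tiny primes, purely to illustrate the idea (never use this for real systems; use a vetted crypto library with 2048-bit-plus keys):

```python
import hashlib

# Toy textbook-RSA signature, for illustration only.
p, q = 61, 53
n = p * q   # public modulus (3233)
e = 17      # public exponent
d = 413     # private exponent: inverse of e modulo lcm(p-1, q-1)

def digest(message: str) -> int:
    # Reduce the SHA-256 digest modulo n so it fits the toy key size
    return int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n

def sign(message: str) -> int:
    # Only the holder of the private exponent d can produce this value
    return pow(digest(message), d, n)

def verify(message: str, signature: int) -> bool:
    # Anyone holding the public key (n, e) can check the signature
    return pow(signature, e, n) == digest(message)

sig = sign("I posted this tweet")
print(verify("I posted this tweet", sig))            # True: signer cannot deny it
print(verify("I posted this tweet", (sig + 1) % n))  # False: tampered signature
```

The asymmetry is the point: verification needs only the public key, so anyone can later prove the message was signed, while only the key holder could have signed it.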

It seems to me that the "right to be forgotten" circumvents that assurance. For example, someone posts racist or sexist jokes on Twitter, they get called out for it, and the tweets get embedded into my blog or into a major news website that is writing about the incident. Under the right to be forgotten, if the tweets are deleted, then my site and every website referencing the incident would lose the proof that the incident occurred. And if the tweet content was instead copied to my website or to the news sites via the Twitter API, we would be on the hook to remove "the evidence".

Perhaps the lawyers who wrote the GDPR spoke to information scientists who have some clever way to handle non-repudiation of negative incidents, but on the surface the right to be forgotten seems problematic. Shall I have to resort to taking screenshots?
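Short of screenshots, one option is to archive a copy of the content together with a tamper-evident digest. This only proves the archived copy has not been altered since capture, not that the author wrote the original (that would require the platform to sign posts); the record fields below are illustrative:

```python
import datetime
import hashlib
import json

def archive_record(author: str, text: str, url: str) -> dict:
    # Capture a copy of a post plus a SHA-256 digest over a canonical
    # serialization, so later edits to the archive are detectable.
    record = {
        "author": author,
        "text": text,
        "source_url": url,
        "captured_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    canonical = json.dumps(record, sort_keys=True).encode()
    record["sha256"] = hashlib.sha256(canonical).hexdigest()
    return record

def is_intact(record: dict) -> bool:
    # Recompute the digest over everything except the stored hash
    body = {k: v for k, v in record.items() if k != "sha256"}
    canonical = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest() == record["sha256"]

r = archive_record("@someone", "the offending tweet text", "https://example.com/status/1")
print(is_intact(r))   # True until any field is modified
```

Publishing or timestamping the digest somewhere independent (a third-party timestamping service, for instance) strengthens the claim that the capture existed at a given time.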

Then there is the issue of how the GDPR defines personal information: IP addresses are considered personal information. I think many network security and forensics analysts are going to panic thinking about how they will analyse traffic flows during an investigation once that personal information has been scrubbed or deleted from logs. Data security standards such as PCI DSS require the retention of at least a year of network, web server, computer system, and application logs so that the details of a data breach can be analysed to determine cause and attribution.
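One common compromise for logs is keyed pseudonymization: replace each IP address with a stable token so traffic flows can still be correlated during an investigation while the raw address never reaches long-term storage. A minimal sketch, assuming a secret key that would in practice live in a key-management system and be rotated on a schedule:

```python
import hashlib
import hmac

# Assumption: in a real deployment this key is stored in a KMS, not in code.
SECRET_KEY = b"rotate-me-regularly"

def pseudonymize_ip(ip: str) -> str:
    # HMAC keys the mapping: the same IP always yields the same token,
    # but without the key the token cannot be reversed to the address.
    tag = hmac.new(SECRET_KEY, ip.encode(), hashlib.sha256).hexdigest()
    return "ip-" + tag[:16]

line = '203.0.113.7 - - "GET /feed HTTP/1.1" 200'
ip, rest = line.split(" ", 1)
print(pseudonymize_ip(ip), rest)
```

Because the mapping is deterministic under one key, an analyst can still group log lines by client; rotating the key periodically limits how long any one pseudonym stays linkable.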

I want to stay positive about all of this. For all the organizations large and small -- especially local businesses with a web presence -- that are working on solutions to meet the GDPR and its implications but don’t have the in-house talent, one of the first places to start is classifying data. The next step would be to fingerprint and uniquely identify every user’s “critical” data and encrypt it. You may no longer be able to allow anonymity.
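As a hypothetical first pass at that classification-and-fingerprinting step, one could tag each field of a user record as PII or not, then derive a salted fingerprint from the PII fields so records can later be matched (for erasure requests, say) without copying the raw values around. The field taxonomy and salt handling here are assumptions, not a prescription:

```python
import hashlib

# Illustrative taxonomy: which fields count as PII in this sketch.
PII_FIELDS = {"name", "email", "ip_address"}
# Assumption: in practice the salt is a managed secret, not a literal in code.
SALT = b"per-deployment-secret-salt"

def classify(record: dict) -> dict:
    # Label every field so downstream handling (encryption, retention,
    # erasure) can key off the classification.
    return {k: ("PII" if k in PII_FIELDS else "non-PII") for k in record}

def fingerprint(record: dict) -> str:
    # Stable, salted digest over only the PII fields, in sorted order,
    # so the same person yields the same fingerprint across systems.
    pii = "|".join(f"{k}={record[k]}" for k in sorted(PII_FIELDS & record.keys()))
    return hashlib.sha256(SALT + pii.encode()).hexdigest()

user = {"name": "Khürt", "email": "k@example.com", "plan": "free"}
print(classify(user))
print(fingerprint(user)[:12])
```

Encrypting the fields tagged PII would be the step after this; the fingerprint gives you the stable handle needed to find every copy of a user’s data when a deletion request arrives.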

Copy to IndieWeb News.

Author: Khürt Williams

a human, an application security architect, avid photographer, nature lover, and formula 1 fan who drinks beer.