CUC2022: Otvaranje u zatvorenom svijetu - postdigitalna znanost i obrazovanje / CUC2022: Opening up in a closed world - postdigital science and education


The Ethics of the Personal Digital Twin

‘Good work helps citizens, communities, and firms to withstand short-term shocks and adapt to long-term transformations. It supports good health and fosters a sense of cooperation and solidarity across communities, binding us together as we work towards shared goals.’ (The Future of Work Commission, 2020)

As the founder of Digisheds, a skills and employability social enterprise targeting wraparound wellbeing support at disadvantaged and excluded young people, I have seen first-hand the impact that a severe lack of trust in the purveyors of digital applications and social media platforms has on learners’ mindsets, and on their reluctance to engage with our training programs. In multiple settings we have spoken with disadvantaged young (and older) people who, facing serious financial, health, or social challenges, urgently want and need to develop new digital skills. Such skills have been shown to open access to better-quality jobs (as defined in the framework published by the Good Work Monitor) and better income, yet many remain adamant, or in some cases simply too afraid, that their personal data, once captured, will be used unethically.

One concept we frequently use to explain how our learning pathways can give them greater, or at least enhanced, control and autonomy over their data is that of the ‘personal’ or ‘human’ digital twin. We explain this as a virtual representation of all their data interaction activities, captured as a digital persona, which in many cases impacts or drives their decisions in the real world. We show them, through real-life examples, how every detail they enter into any digital tool or platform is captured. Once they discover more about how their digital activities are captured and used, they can start to see how to exert their own control over what, when, and how their data is retained. Within the context of our training, they begin to grasp more clearly how their own personal digital twin has been developing from a very early age.

We explain in simple terms that the concept originated in engineering, where a complex machine is ‘twinned’ as a digital simulation, and that the model has since been extended to include humans. Such a simulation can model a machine’s functions so as to monitor its past and present behaviour and to repair, correct, improve, or otherwise ensure its optimal operation. It is then a relatively simple leap to share details of how AI (artificial intelligence) has the potential to create the same simulations, providing the same optimal results for us as humans. Once we showcase real-life examples of how health data gathered about a real physical person (e.g., blood tests, imaging data, fitness sensors…) can help to create a better digital twin, enabling us to react earlier, and at lower cost, to current and future medical problems, the point is made.

We also discuss the ethics of this approach, offering alternative and potentially ‘darker’ scenarios to help them consider the ubiquitous distribution of digital assistants and the rapid progress of machine learning, concurrent with the exponential growth of ‘personal’ Big Data. My belief is that we need a blended learning toolkit that continues to explore how these independent technological trends are converging towards the digital replication of individual human data and life history. In this way we can start to shape each citizen’s view of their power over their ‘agency’, enabling them to act in response to actions made on their personal data, and to improve their control over their ‘negotiability’, thereby influencing others’ use of their data now and in the future.
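To make the idea concrete, a personal digital twin can be sketched as nothing more than an accumulating record of data interactions. The following Python sketch is purely illustrative (all names are hypothetical; this is not Digisheds’ actual tooling): it shows how an interaction history could be queried to surface disclosures the owner never consented to, the starting point for the ‘agency’ lever described above.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DataInteraction:
    """One captured exchange with a digital tool or platform."""
    timestamp: datetime
    platform: str        # e.g. a social media app or job portal
    fields_shared: list  # which personal details were disclosed
    consented: bool      # did the person explicitly agree?

@dataclass
class PersonalDigitalTwin:
    """A digital persona accumulated from a person's interaction history."""
    owner: str
    history: list = field(default_factory=list)

    def record(self, interaction):
        self.history.append(interaction)

    def unconsented_disclosures(self):
        # Surface the interactions the owner never agreed to --
        # the first step towards exercising 'agency' over the data.
        return [i for i in self.history if not i.consented]

twin = PersonalDigitalTwin(owner="learner-001")
twin.record(DataInteraction(datetime.now(), "job portal",
                            ["name", "email", "postcode"], consented=True))
twin.record(DataInteraction(datetime.now(), "social media app",
                            ["location", "contacts"], consented=False))
print(len(twin.unconsented_disclosures()))  # -> 1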
My reasoning is that among the psychological effects of the digital transformation is an inexorable proliferation of systems that artificially predict our decision-making processes, which will inevitably lead to digital functions being wired directly into our brains. In recent years, several new technical methods have been developed to make AI models more transparent and interpretable; these techniques are often referred to collectively as ‘AI explainability’ or ‘XAI’ methods. I believe that the underlying purposes for using AI, bundled with XAI oriented towards diversity and inclusion (D&I), can be a key part of a digital inclusion strategy that is both ethical and trust-based. This approach has the potential to bring together the technical and ethical dimensions of HDI (Human-Data Interaction) to better realise an integrated postdigital D&I reality. Our Digisheds model continues to integrate HDI tools and methods with linked explanations of the potential positives and negatives of the human decisions made during the digital development life cycle. Situated within a wider social governance and accountability framework, this approach can offer a helpful starting point for policy makers, academics, training providers, and funders/commissioners who need to make practical judgements about which HDI methods to employ or to require.
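To illustrate what an XAI method looks like in practice, here is a minimal, hedged sketch using permutation importance from scikit-learn, one common model-agnostic explainability technique. The data and model are synthetic stand-ins, not anything drawn from Digisheds’ systems.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic placeholder data; in a real setting the features might be
# interaction counts, course progress, demographic fields, etc.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a simple, human-readable account of what drives the model's output.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")

Methods like this let a learner (or a commissioner) ask which of their data fields a model is actually leaning on, which is precisely the kind of transparency a trust-based digital inclusion strategy depends on.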

Alex Cole
Tin Ventures Ltd

Prerequisite knowledge:

 

