
Data Ethics: data science must not be left solely in the hands of experts

A great number of universities around the world offer courses in data science, machine learning, and artificial intelligence. Isn't it time they added data ethics to the agenda, at a time when tech giants such as Facebook and Google, to name but two, have become familiar targets for their lack of respect for users' data, or at least for their inability to prevent such data from being stolen by external players like the ill-famed Cambridge Analytica? This contribution provides the framework of my presentation on behalf of the Business Analytics Institute at the CDEFI conference on "Ethics and digital technology" [éthique et numérique] on June 6th in Toulouse.

We all have our crosses to bear, but Mark is carrying an even bigger one: that of our data stolen by Cambridge Analytica — amongst others. [Collage by an anonymous artist – Paris January 2019]

How important is data ethics…

What issues need to be addressed, which themes should be explored, and how can the subject be taught effectively?

Data ethics involves the study and adoption of data practices, algorithms, and applications that respect fundamental individual rights and societal values. The primacy of data in modern economies becomes more apparent each day. Success not only in science but in business and society depends on understanding both what data exist and what they represent. It is little wonder that universities around the world now offer specializations in data science, machine learning, and artificial intelligence. Yet confining data science to the realm of specialists is both short-sighted and potentially perilous, for public and private organizations alike are increasingly relying on analytics to monitor and evaluate almost every aspect of our daily lives.

Is data ethics limited to concerns about e-mail scams, the abusive use of micro-targeting, and the immorality of troll farms?

Cambridge academics have monetized their research on psychometric data to predict and influence behavioural preferences. Facebook deliberately modified the sentiment of nearly seven hundred thousand of its users' news feeds without their consent. Amazon has continued to aggressively market its facial-recognition tool Rekognition in spite of concerns over privacy and bias. Courts are using algorithms to profile convicts according to "risk" based on skin colour at each stage of the legal process. Employers are recruiting with algorithms that inherently favour certain socioeconomic groups. These applications of data science cannot be dismissed as simply "business as usual", for they produce ethical consequences that condition the future of both business and society.

What types of problems are we trying to solve?

In applying data science to automate processes, interpret sensory data, master conceptual relationships, or influence environmental dynamics, artificial intelligence (AI) can be distinguished from machine learning (ML) by comparing their objectives, methods, and applications. By its very nature, machine learning has historically focused on producing new knowledge, whereas AI aims to replace human intelligence. Machine learning uses algorithms to improve supervised, unsupervised, or reinforcement learning; AI leverages algorithms to replicate human behaviour. Data scientists deploy machine learning to better understand patterns in the data; they hope that AI will provide the answer to complex problems. If the objective of ML is to improve our ability to make better decisions, that of AI is to provide the optimal solution. The ethical implications of data science depend upon each organization's objectives, data practices, and applications.
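
To make that distinction concrete, here is a minimal sketch in Python. It is only an illustration, not part of the Institute's material: it assumes scikit-learn is available and uses its bundled iris dataset to contrast a model that learns patterns from labelled examples with a hand-coded rule that simply replicates a human expert's decision procedure.

```python
# A rough illustration: contrast pattern-learning (ML) with a fixed rule that
# replicates a human decision (AI in the narrow sense described above).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Machine learning: the algorithm extracts patterns from labelled examples.
learned = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print("accuracy of the learned model:", learned.score(X_test, y_test))

# Hand-coded rule: a human expert's decision procedure, written in code
# rather than learned from the data (thresholds chosen for illustration only).
def expert_rule(sample):
    petal_length = sample[2]
    if petal_length < 2.5:
        return 0
    return 1 if petal_length < 4.9 else 2

hits = sum(expert_rule(x) == label for x, label in zip(X_test, y_test))
print("accuracy of the hand-coded rule:", hits / len(y_test))
```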

Is data ethics a bigger subject than artificial intelligence itself?

If the scope of artificial intelligence is difficult to gauge, its societal impact extends far beyond trying to "do something useful with the growing morass of data" at our disposal. Are data just technology? Can AI be reduced to a form of experimental logic designed to cure modern life's ills? Because artificial intelligence reflects the visions, biases, and logic of human decision-making, we need to consider to what extent AI can be isolated from the larger economic and social challenges it has been designed to address. Emerging issues such as personal privacy, public engagement with data, the pertinent metrics for evaluating human progress, and the relationship between data and governance suggest that data condition how we see and evaluate the world around us. If data are of little value until they are used to incite decisive action, data ethics needs to focus less on data and algorithms than on their impact on the bounded rationality that defines human decision-making. In sum, as the proponents of open science suggest, there is no binary opposition between data and action, only interactions between interventions and contexts.

Data Ethics
We are at a crossroads: either we take the path of data ethics, or we choose the one that leads to more data theft and less respect for users. This issue is vital for the digital world.

Which subjects need to be addressed in a curriculum on Data Ethics?

As the initiatives in Europe, Brazil, India, Singapore, and California illustrate, the issues surrounding personally identifiable information, explicit consent, and the rights to access, to rectify, and to be forgotten all need to be explored. Implicit bias should also be high on the list, examining how attitudes and preconceptions influence our understanding of data, cognition, logic, and ethics. The managerial issues around digital transformation can be analyzed, including the extent to which managers and organizations need to take ownership of, and be held responsible for, their data practices. Technology's impact on reasoning should also be discussed, for our reliance on data has subtly modified the traditional definitions of "freedom of choice", "privacy", "truthfulness" and "trust". Finally, the compatibility between AI and innovation can be examined: our reliance on scientism belittles other forms of human intelligence, including emotional (interpersonal), linguistic (word smart), intrapersonal (self-knowledge) and spiritual (existential).

Finally, how and where should Data Ethics be taught?

As a baseline, Rob Reich suggests that all those who are trained to become technologists should have an ethical and social framework for thinking about the implications of their work. Yet, as the parliamentary hearings on AI in France and Germany demonstrate, students preparing for careers in public policy and other fields would also profit from a better understanding of the societal impacts of data science. Rather than proposing a checklist of "rights and wrongs", modules on data ethics would be well inspired to focus on the ethical consequences of data-driven problem-solving. In the absence of a universal list of "rights and wrongs", Shannon Vallor argues that students need to develop "practical wisdom" to navigate the ethical challenges posed by successive generations of technology.[xii] If data ethics may never be fully captured in a single course, it can be better explored as a framework applied to academic study and research as a whole.

The Business Analytics Institute offers “Data Ethics” as both a turn-key module and a framework. For more information, please contact ethics @ baieurope.com

Lee Schlenker

Prof. Lee SCHLENKER teaches in the fields of Business Analytics and Community Management and is a Senior Consultant at the Business Analytics Institute.