Ethics of AI

Disclaimer: this text expresses the opinions of a student, researcher, and engineer who studies and works in the field of Artificial Intelligence in the Netherlands. The contents are not as nuanced as they could be, but the text is informed — in a way, it is just my opinion.

Allow me then to begin by reiterating the sentence with which Wittgenstein ends his first treatise in philosophy, the Tractatus Logico-Philosophicus: "Whereof one cannot speak, thereof one must be silent" [7].

The problem with the Ethics of AI, put succinctly, is the demand for morally motivated changes to an empirical scientific field — the field of AI, or Computer Science. These changes have been easily justified in AI due to its engineering counterpart — one of the fastest-growing and most productive technological fields at the moment, whose range of possible applications threatens to reshape every social dimension. Most of these changes, for better and for worse, have been demanded by the political class, and for the most part only in the West. The aim of this article is not to take any part in the political discussion, although this might be impossible by definition — after all, everything is political. It is still important to attempt to disentangle the views expressed herein from those barked in the political sphere. The very root of the problem is linked to the over-politicization, indeed, perhaps even radicalization, of systems that are not political by nature, like Science. The problem that a scientific field has been mixed up with its applications in industry is a prominent one. Leaving their personal faith outside of the lab has been common practice among modern scientists, but their hands are tied when it comes to the law.

Weaponizing AI is far too common, but it usually happens just as in any other scientific field: after the research has yielded its fruits. This is a very human thing. And by that I mean it is contradictory as heck. Remember the atomic bomb? The research behind it could have been (and was, and is) applied to energy production — a positive-sum outcome. But people — namely politicians and the military — decided to use it during a war to kill other people instead. A hammer can be used to build a house, but it can also be used to break one. This is not politics or ethics necessarily, it is simply a fact: the politics and ethics happen around the fact, inside people's heads, while discussing how to use the hammer. Ethics of AI, as fuelled by modern politics, is weaponizing AI even before it gets the chance to leave the lab, rendering it useless. In other words, although plenty of reasonable ethical concerns are constantly raised regarding the engineering side of the field of AI and its applications in society, the problems come when a select few — non-experts, for that matter — start messing around with the scientific process, its definitions, its statistics, and its goals for the sake of doing what they consider "right". Doing what is "right" might just destroy the most productive field yet to have come out of the methodology of thinking that we call "Science". The core of the problem is that Ethics is no longer concerned only with the end goals of AI applications; it is now concerned with the way AI research is conducted in the lab, its methodologies, and its theories. This results in the death of science, where instead of searching for what is true, one starts claiming what should be true. At the root of this problem is new-wave politics and its hunger for power.

Ethicists are not ipso facto AI scientists. But AI scientists are well informed on ethics, particularly utilitarianism and consequentialism. This is often the case across the sciences, or STEM, where students are taught explicit philosophical theories. Unfortunately, the humanities seldom learn about their counterpart, the sciences. Philosophers are mostly not taught scientific techniques. This is not strange, because the nature of science in the 21st century is extremely technical and non-trivial. The two fields should not be confused, then — but academia is attempting to change this, quickly and badly. Not just in AI, but across STEM. The methods by which the Humanities and the hard Sciences, and indeed Ethicists and AI researchers, arrive at their conclusions are astronomically different. While AI, an empirical science subject to the scientific method, attempts to explain natural phenomena through experimentation, hypothesis testing, and modeling on the basis of predictive outcomes, Ethics, like most philosophical inquiry, happens in the head until it gets forcefully applied through law. Scientists discover laws; ethicists and philosophers peddle ideas that lead politicians to invent new laws. And regardless of logical consistency, the claims of philosophers are not self-evident nor true by any measure of that word. "Truth" here is taken to mean the approximation of empirically observable reality, whatever that may be. And if you happen not to have read any Ph.D. thesis on moral philosophy as of late (I don't blame you), then let me elucidate: no student of Philosophy is confused about the fact that they are not investigating empirical reality. Most statements made in the context of philosophy and ethics, if not all, remain unproven and even unrelated to objective reality, existing only in a post-modern echo chamber where statements are followed by non sequitur after non sequitur posed in an imperative fashion. In other words, philosophy, as it stands, does not prove or disprove anything, despite its interesting hypotheticals, thought experiments, and the fortunate historical circumstance that "philosopher" is the word we use for the people who led the way in inventing the early process of attempting to reach empirical truths. Philosophers are not Scientists.

The conjectures of philosophical inquiry that were later verified to be the case saw the verification done by other fields — which makes their first spokespeople arbiters of nothing but luck or good guesses. Scientific fields like AI require substantiating concepts like reproducibility (an experiment can be repeated in the same way), replicability (the repeated experiment yields the same results), evidence and data, experiment, and the capacity to output general laws that systematically yield predictive results above chance (when checked against nature through well-designed experiments). While scientists will have studied philosophy as part of their curriculum, explicitly or implicitly, philosophers or humanities students will often find themselves in a Ph.D. position without ever really having set their eyes on the underlying theory necessary to understand what qualifies as truth in the hard sciences. The prerequisite mathematical formalization, modeling techniques, and even the notion of an experiment would have them running away, quickly. In this race — the race for truths — Physics is the furthest ahead. This is not by chance or by someone's whim and rule, but by design and by fact of history. It is the most human-devoid subject of them all, requiring immense capacities for mathematical manipulation, thousands of years of cumulative work, and astronomical technological advancements to make even the slightest attempt at getting hard experimental evidence for any hypothesis at the moment. All of this rests on the shoulders of teams often numbering hundreds of scientists, working together from all across the globe in an awesome effort that collapses cultural differences in the name of the pursuit of truth. It is also perhaps the oldest science or subject of inquiry known to our species, if one is lenient with definitions — and one tends to be nowadays. If we recall, even in ancient Greece some philosophers preferred to be called natural philosophers, no doubt because that was the goal of their effort: to understand nature. Some of the world's brightest minds studied physics and, by definition, the universe. This makes it a very good subject for comparing one's own path during deep self-reflection, sitting in one's armchair, potentially rocking back and forth. Philosophers are not Scientists.

Historically, Ethics is an autocratic effort to claim ownership over what is considered to be good or bad. Its conception is a Greek fruit of genuine inquiry marked with rotten deontological stains in the form of the virtues. For the larger part of its life, it was procured and owned solely by religion. While it doesn't offer real solutions to problems, it limits the set of possible decisions regarding the solution of a problem, deeming some good, or better, and others worse, or bad. More often than not, these decisions are based on arbitrary cultural inheritance. For anyone who does not subscribe to the prescribed ideology, the limits or rules of the ethics in question will seem arbitrary. The ethics of a people often emerges naturally as their status quo and is often mapped onto the law of a country or region — it is what most people within a group agree is good or bad up to a certain point in time. In philosophy, it used to be the discussion of what rules should guide action and why — the comparison of these, the arguing for and against, the meta-analysis. But in modernity, it has been mutated by the recent ideological movement that swept through the Humanities. Strange unscientific disciplines emerged within the social sciences, whose students became professional academic dictators after being subjected to three years of radical anti-Western narrative. As per usual, roughly 70% of the student body ended up outside of the academy, either as professional protesters or writers, for example. But most found that their vocation seamlessly coincided with HR positions where they can be the arbiters of rules. A lot of these people were hired to work at universities, where they introduced dozens of sensitivity-training workshops, parallel bureaucratic positions, and tons of programs with the aim of equity or equality of outcome, without any regard for statistical fact, geographical access to people, or any other valid reason that might explain the current distribution of people and their local decisions. But we're focusing on a particular implementation or technology: a spectrum of courses of the type "Ethics of X", where X is any field, the aim of which is not to argue, discuss, or even study the history of Ethics or the ethics of different peoples, but rather the conversion of students to a certain ideology.

The technical difference between the sciences and non-scientific fields (like Ethics) can be demonstrated with a thought experiment which will hopefully also elucidate why their involvement can be detrimental: would the reader prefer to fly in an airplane built by an expert in Aeronautical Engineering or by an Aeronautical Ethicist? Be relieved to know that the latter wouldn't pose much risk, since it would never lift off the ground. But what about an airplane built by an Aeronautical Engineering expert under the supervision of an Ethicist who deemed the landing gear wrong because of its inherent immorality? This anecdote, although bad in taste and perhaps everything else comedy-wise, really should show how ridiculous it is that certain positions are given to ethicists. Not only is the landing gear being deemed wrong — the very aim of the airplane is being corrected by this Ethicist! Universities are handing the keys to the gates of "Truth" to these agents of relativism. And they have begun to do what they do best: dictating what they want to be the case without any respect for what is the case. All of this is happening under the nonsensical accusation that AI scientists (or any non-experts in the field of Ethics) are not capable of distinguishing right from wrong, while Ethicists (non-experts in AI and everything else) are capable of dictating how any science — how anything — should be done.

When new non-expert definitions take priority over previously technically sound theory, solutions to real-world problems lose priority to ideological movements or power grabs by intellectuals whose research is not on the line: they don't publish in the field, they just tell the field what to publish. People with power but no expertise decide on what is good without using the field's scientific method, and this castrates solutions that actually, practically benefit people, or experimental evidence that points towards some natural phenomenon. It substitutes the aim of reaching some real objective good, of approximating what is the case — that which requires empiricism, experiment, and data — with some ideological, political, and strategic calculation. In the case of AI, this is nonsensical. Ideology peddlers are now attempting to reframe AI research so that their views of the world become the case, weaponizing AI technologies to carry out their will. AI is a (if not THE) science that theoretically studies knowledge and intelligence and mathematically formalizes how to make informed decisions and pursue them. It does exactly what these new-wave HR-ethicists attempt to do, but it actually does it: through designing and implementing computational systems, carrying out hypothesis testing, and applying mathematical/statistical analysis and optimization techniques (the beginnings of a scientific epistemology).

This attempt of Ethics at absorbing AI can then be seen as an impulse propelled by a rudimentary survival instinct: a sort of academic parasitism. This might be part of a larger problem that has been looming over the STEM fields and has infected academia — the humanities are, through no fault of their own, endangered by technology. In our century, the fields of the classical humanities have been mutating into a mashup of cargo-cult science, observing the STEM fields in an attempt to mimic their scientific methodologies, which rely heavily on computational methods, in order to achieve some productivity and stay afloat in the new market of knowledge and skills. This might very well be because the increase in scientific productivity, societal dependence on technology, the overabundance of jobs, and the increased salaries in STEM are starving the humanities of pupils. Their evolutionary strife results in a soup fitting of the absurd, since these in-between fields show none of the scientific integrity of the hard sciences, nor do they hold the same standards of truth. At the same time, their students forget what made the humanities special to begin with: the study of what it means to be human — a problem that has always reflected the necessary contradictions and absurdity involved in existing as a person in this world, the type of stuff that one leaves at the door as a scientist. This raping of classical studies comes, then, from the need to be taken seriously felt by a set of fields that has long since lost its grip over definitions of truth and has willingly replaced them with relativism. Indeed, some even deny that there is such a thing as empirical reality. But how can STEM fields — or any field for that matter — take relativism seriously when it is bathed in contradictions and full of unproductive theories (theories that fail to predict anything)? For example, the parents of post-modernism and the root of applied relativism (folks like Foucault, Derrida, and Deleuze) all claimed in one way or another — and without ever studying a science — that science could not explain reality. They claimed this only to replace science with their own personal theories and versions of reality — theories which had no predictive value. This is not a claim; it is literally what they did. These philosophers — also known as the deconstructionists — attempted to deconstruct reality and failed miserably. The differences between their diverse accounts of reality and their incapacity to provide any meaningful prediction or concrete falsifiable theories prove this. The armchair philosophers all provided very intimate, personalized views of reality that now live on as memes placed in students' heads, who then think themselves free to pick the one that best resonates with their feelings about the real world. All of this without ever mentioning statistical laws, probability theory, the value — no, the necessity of experiment — or any formal account of these theories.

Philosophy has a special place among the in-between sciences. As it stands today, unproductive and completely abstract, it has had the uncomfortable tendency to claim ownership over ideas, work, and even complete fields as proof of its productivity, perhaps due to the pressures and demands of the modern world. Philosophy was the mother of 'science': not of modern science, but of the ideas that birthed the natural sciences and of the methodology of thinking rationally, of looking out there into nature and trying to account for its patterns so as to predict phenomena. So it is only natural that it would feel it deserves some recognition. And it does: who can remember a single popular science book that does not begin with some praise or even adoration for philosophers? Especially the ones that walked on ancient Greek marble floors. But those floors are gone — replaced with pavements of black tar.

Take the more prolific and prevailing approach in AI to making decisions: that of acting rationally. The rational-agent approach sees an agent as something that acts autonomously, perceives its environment, persists over a time period, adapts to change, and creates and pursues goals. A rational agent is also one that acts so as to achieve, under uncertainty, the best expected outcome. Making correct inferences is important in this context, since deducing that a given action is best enables an agent to act on that conclusion. But there are ways of acting rationally that cannot be said to involve inference: for example, reflexive actions like moving away from an oncoming vehicle. Finally, the standard of rationality is mathematically well defined and completely general. If one wants to build a robot — let's call it Johnny — that autonomously achieves a certain goal delineated by an engineer, an Ethicist might come in handy when discussing the goal in the context of the alignment problem and the overall state of society as it might be impacted by Johnny. But then again, wouldn't a lawyer know more when it comes to particular applications of Johnny? Or an anthropologist — wouldn't they offer a better-contextualized representation of the society in question, of its morals and culture? The problem gets worse: while the engineer is vividly aware of the technical complexity, the Ethicist is free to roam a landscape of virtues and utility calculations that are not grounded in reality or that ignore the intricacies of the problem.
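
For concreteness, here is a minimal sketch of the rational-agent idea described above: an agent that picks, under uncertainty, the action with the highest expected utility. The actions, outcome probabilities, and utility values are invented for illustration; they are not taken from any particular system.

```python
# A minimal sketch of a rational agent: choose the action with the highest
# expected utility under uncertainty. All names and numbers are illustrative
# assumptions, not taken from any real system.

def expected_utility(action, outcome_probs, utility):
    """Sum of P(outcome | action) * U(outcome) over all outcomes."""
    return sum(p * utility[outcome]
               for outcome, p in outcome_probs[action].items())

def rational_choice(actions, outcome_probs, utility):
    """Pick the action that maximizes expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))

actions = ["deliver_package", "wait_for_clear_path"]
# A hypothetical model of the environment: P(outcome | action).
outcome_probs = {
    "deliver_package":     {"on_time": 0.7, "late": 0.2, "collision": 0.1},
    "wait_for_clear_path": {"on_time": 0.4, "late": 0.6, "collision": 0.0},
}
# The goal, encoded as utilities. This table is the part worth debating.
utility = {"on_time": 10.0, "late": 2.0, "collision": -100.0}

print(rational_choice(actions, outcome_probs, utility))  # -> wait_for_clear_path
```

Note where an Ethicist could legitimately enter this picture: in the discussion of the utility table, i.e. the goal, not in the machinery that maximizes it.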

One can shape an infinite number of sets of arbitrary virtues, goal states, and preferable ways of doing things. If we dispose of empiricism, then we cascade into a Wittgensteinian nightmare where anything can be said and done — the world of unicorns in our heads, where symbols (like words) do not symbolize objects in reality. Without needing proof — at least proof that is demonstrably true and falsifiable — how can we accept the changes that are demanded? The shared physical space of reality has been lost and traded for the mind and its stories. Sophistry takes the stage. Independently of the opinions of the mind and outside of the mind, out there, there exists a testable hypothesis that can be arrived at by anyone at any moment using experiments and data. This is why buildings that are built without a good understanding of physics fall. This is why bad practices in medicine in previous centuries probably killed more people than doctors saved: the bacteria were there regardless of anyone's opinion. The problem is that the mind exists in reality but is contained in a safe space: the space of computation. And it only interfaces with reality through flawed systems that are prone to error and that offer too many different interpretations to be considered optimal, like human language (e.g. this essay) or qualia: what it feels like for someone to be given a certain sense-input. More importantly, it is not bounded by rules of nature that can be observed and measured: there are no 1-to-1 mappings between the mind and the real world. Although a brain is a physical object, its computations are not. Anyone can devise a program where "1+1" does not equal 2, imagine an animal that does not exist (e.g. unicorns or dragons), write a fantasy story, or program a game where the physical laws are completely different. Without a shared language like science, which offers an indirect way of testing hypotheses and showing how they may be wrong — how you and I might be wrong — everyone is free to make up reality as they wish. But reality will still be there whether or not we choose to look at it, and people will suffer the consequences regardless of our fantasies. As Feynman said: "The truth will always out".
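
To make the point about computation being unconstrained by natural law concrete, here is a trivial, purely hypothetical sketch in which the symbols "1 + 1" are made to yield something other than 2:

```python
# A trivial illustration: inside a program, the symbols "1 + 1" mean whatever
# we define them to mean. This is deliberately broken, on purpose.

class Wonky(int):
    """An integer type whose '+' has been detached from ordinary arithmetic."""
    def __add__(self, other):
        return Wonky(int(self) + int(other) + 1)  # off by one, by design

print(Wonky(1) + Wonky(1))  # prints 3, not 2
```

The program runs without complaint; nothing inside the computation objects to the redefinition. Only a test against something outside the program — shared, measurable reality — can.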

The deployment of ethics in its current form, then, is not only unscientific; it also castrates AI of its empirical basis and renders the field useless — it renders any scientific field useless. Ethics is important as a framework where normative assertions can be made regarding goals, not regarding the way to achieve them. This discussion has its place in the space where politics and law exist, not in the lab. Take the relationship between church and state. It is analogous to the relationship between AI and Ethics as they are currently taught in universities. Why was the former separated? Succinctly, because people wanted freedom. Fostered by certain Enlightenment thinkers like John Locke and Thomas Jefferson in their search for the secular state, the debate became widespread enough that a crack formed in the European model of reality. They searched not only for religious freedom and the freedom not to be religious, but also for the freedom to foster views and ideas detached from the world of story and religion (i.e. faith-based and scripture-supported views of the world). A race towards truth — all sorts of truth — was then initiated: different political systems were proposed, and different flavors of religion as well. And the empirical view was rekindled too. Drawing on philosophers like Plato, and in a Socratic fashion, the philosophers, religious leaders, politicians, mathematicians, inventors, writers, and polymaths of Europe all attempted to achieve a new model of reality. Some of them, like Descartes, Newton, Kant, and Hume, thought of a world based on causality, logic and reason, mathematics and proof (i.e. a secular or scientific view of an empirically observable world instead of a faith-based one). And when this view or methodology yielded the most productive series of inventions, predictive theories, and overall progress in human life, it conquered and fuelled almost every aspect of society and spread through the world. Ethics applied to AI marks a regression from these efforts within the field of AI. In fact, its inception in the form of "Ethics of X", where X is any scientific field, is strikingly similar to the religious state poisoning our methods of inquiry, where inquisition, censorship, and opinion-based changes to the field (regardless of how mild) destroy any science. In the case of AI even more so, where a technically grounded science that relies heavily on complex computational methods sensitive to mathematical manipulation yields to the ethics of the ignorant: the scriptures and descriptions of some ideal state that exists only in the head of an arbitrary philosopher elite. If the two sound disconnected, that's because they are and they should be. Machine Learning cannot yield productive models if an Ethicist has already decided what the result should be. AI cannot be used to find good predictors for some model if an Ethicist consistently determines these to be bad or wrong based on their opinion or political affiliation. Science cannot exist in a space where people construct hypotheses backward. The model needs to be biased — because the biases are the patterns, and if we ignore the patterns then we do not know anything at all about anything whatsoever.

Religion has the necessity to make claims regarding the real world. In fact, it requires the power to do this, so as to support its normative rules against the wider populace and other religions. It also necessitates the capacity to do so without the burden of proof when evidence contradicts its claims. This might very well be a prerequisite for governing powers and their source of authority. There is then a competition for truth between governing bodies and science, or any other sense-making mechanism, because of diverging claims about reality. One difference between these is the methodology behind the "search for truth" algorithm. Any theory that does not take into account natural phenomena, or does not approximate a predictive description of them, is empirically arbitrary. Ethicists' "goodness functions" are then arbitrary, but historically consistent — because they depend on the history and geography of a place and its people. Religious entities largely defined what the modern Ethicist has to say about good and evil. This can be noted in the fact that a Middle Eastern Ethicist will disagree with a central European Ethicist in a significant way, the latter will disagree with a North American one, and all of them will disagree with a Central Asian one, in a way that can be predicted by their historical versions of religion. The shared "empirical" world is much smaller for Ethicists and religious entities than for scientists — since the latter are always bound by the same object: reality. Because Ethicists attempt to define good and evil, and because they are largely influenced by the historical background of religious thinking, allowing them to exist in the AI engineering pipeline compromises the scientific effort in the field of AI. In the place of science, we are left with opinions causally chained to religious authority. In the end, and in the extreme, we find ourselves with the all too well-known problem of totalitarianism and the tyrant. A living example of this is China. The superpower sends its students en masse to the USA and the EU mainly because its world view and the health of its state and government require heavy censorship (e.g. the Chinese internet [6]), so that its views are not opposed or undermined by scientific efforts, reason, skepticism, or just critical thinking and divergent opinions. In this space, education and science can't exist. Totalitarian countries like China, or very religious environments like Galileo Galilei's seventeenth-century Europe, do not allow for empiricism. In the name of some arbitrary good, atrocities are committed. Undermining AI as a scientific effort, although quite different from China's grip over its people or from religion's past atrocities, should not go unnoticed, lest we give up the study of knowledge and intelligence altogether. Ethics deployed in its original form, that of consistent rational inquiry and discussion about what might be good or bad for society by and large, will always have a vital place in people's education. Perhaps more than ever, we need to remember that the world is complex and nuanced, and that our best tool yet is to explicitly deploy empiricism through valid science (i.e. reproducible and replicable techniques that use falsifiable hypotheses). Finally, let us not forget that despite the attempts at silencing Galileo's methodology and reasoning, we know the Earth orbits the Sun, not the other way around.
Let us not forget that the people who kept Galileo locked up until the day he died chose not to look through his telescope. Galileo dared to open his eyes. Let us not forget, then, to check for ourselves before we decide whether something is right or wrong, lest we lock ourselves up in that same cell, willingly chucking the key out of the window and dying without ever knowing that this Earth goes around the Sun.

This is a fact: Google's AI ethics board was dismantled soon after it had been assembled.

In “How the machine ‘thinks’: Understanding opacity in machine learning algorithms” [3], Jenna Burrell tries to show just how complex neural networks are. She describes them as computational objects where input nodes connect to other nodes called the hidden layer(s), which in turn connect to an output layer. Weights are values associated with the connections between these nodes — a matrix of these values maps the input to the expected output, and these values are what the network learns. An optimal matrix exists — the one that produces the most accurate classification of the input. In the case of a handwritten-digit classification task, for example, this can be easily observed, since the inputs are pixel intensities and the learned weights indicate which pixels matter. This is how the author describes how a machine ‘thinks’ — or at least how a famous algorithm does. Different Ethicists nitpick different problems and decide that the optimal matrix — the solution to those problems — is controversial. The author then goes on to say that automatic spam-filter neural networks can be discriminatory, namely towards Nigerian people. Just as in the previous example, where the network would value certain pixels in classifying a “0” versus a “1”, namely the ones that form a curve versus the ones that form a straight line, the algorithm now weighs words as more or less relevant for classifying an email as spam or not spam. If the current ethical movement had its way, this simple algorithm would receive a correction that “fixes” the “wrong” result. The problem here is that, as a matter of fact, the network is not biased or discriminatory in the human sense (taken to be negative by ethicists); it is biased and discriminatory mathematically speaking — in favor of the pattern (the positive sense). It is not biased in the negative sense, not least because it is not capable of being any of these things, since it is not a human (can a toaster be accused of murder if it happens to fall into the bathtub?). It has actually done an amazing thing by automatically finding a pattern and making a classification based on its training dataset — it is empirical to that extent and insofar as the data was properly collected, which is often the case. So what if an extraordinary number of spam emails and internet scam attempts do mention the word “Nigeria” — they could just as well mention “Morocco”, “USA”, “Apples”, or “Cross-eyed Danielle”. If this word appears in a lot of spam emails, then it is a feature not to be ignored, a pattern of value for predicting the class of the email. If you think that, because the word in question refers to a country that happens to be in Africa, where most of the population happens to be black, the algorithm must therefore be wrong — then you are the one discriminating, and in need of sensitivity training, as well as scientific training and probably critical-thinking training. The ethical correction proposed by Burrell would remove one of the most relevant features for email spam filtering. Anyone who proposes such a ‘fix’ is creating a catch-22 by naively denying the solution to a real problem: people, mostly the elderly, being scammed by email. Isn’t this discriminatory towards the elderly? That is a very large class of vulnerable people to be discriminating against. The correction destroys the usefulness of the method. The fact is that the word “Nigeria” is a good predictor of spam emails independently of ethical considerations, no matter how much this bothers a Western Ethicist.
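
To see how such a spam filter acquires its word weights purely from co-occurrence counts, here is a minimal bag-of-words sketch (a simple log-odds model rather than a neural network, with a tiny invented corpus — none of it is Burrell's data). It also shows what happens to the spam score when the "offending" feature is deleted by fiat:

```python
# A toy bag-of-words spam model, standard library only. The tiny "corpus"
# below is invented for illustration; it is not Burrell's data or any real dataset.
import math
from collections import Counter

spam = ["urgent transfer from nigeria bank",
        "nigeria prince needs your account",
        "claim your prize transfer now"]
ham = ["meeting moved to monday",
       "lunch with the project team",
       "draft of the report attached"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

def train(spam_docs, ham_docs):
    """Per-word log-odds of spam vs. ham, with add-one smoothing."""
    s, h = word_counts(spam_docs), word_counts(ham_docs)
    vocab = set(s) | set(h)
    total_s, total_h = sum(s.values()), sum(h.values())
    return {w: math.log((s[w] + 1) / (total_s + len(vocab)))
              - math.log((h[w] + 1) / (total_h + len(vocab)))
            for w in vocab}

def score(weights, doc):
    """Positive score suggests spam, negative suggests ham."""
    return sum(weights.get(w, 0.0) for w in doc.split())

weights = train(spam, ham)
print(sorted(weights, key=weights.get, reverse=True)[:3])  # "nigeria" is among the top features
test = "transfer from nigeria prince"
print(score(weights, test) > 0)   # True: flagged as spam

# "Fixing" the model by deleting the feature only blinds it to the pattern.
censored = {w: v for w, v in weights.items() if w != "nigeria"}
print(score(censored, test))      # noticeably lower spam score
```

The weight attached to "nigeria" is a function of the counts alone; swap the word for any other token with the same counts and the weight is identical. Removing it does not make the pattern in the incoming mail go away; it only makes the filter blind to it.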
And notice the workflow: someone determines that the use of the word "Nigerian" or "Nigeria" is controversial and discriminatory without argument — this is not self-evident. There is nothing about the clever way in which the algorithm learns patterns that would indicate a discriminatory or biased thought pattern beforehand — I reiterate: the machine is not a person, and Burrell herself would not dare argue the contrary. It is not capable of discriminating against any individual in a social context — it is thinking about as much as the stones on the floor think about getting hot when the sun beats down on them. The machine measures and records patterns. The humans behind the machine are the ones doing the thinking for it, and I wouldn't dare burden them with the typical accusation of being "racist", or of having "discriminatory intent" or other internal innuendos that one has no proof of or access to, when the evidence is in the data to begin with. This might be just one example, but it can be generalized to illustrate how ML applications are being forced into an ideological frame propelled by political innuendo. The general form is the following: a variable in the data is selected by the algorithm as a valuable feature (positively biasing the classification). If this variable's weight in the classification or decision is the opposite of what the present political view says it should be, it is considered "bad" and viewed as a negative bias instead, so that it can be removed from the current list of facts or ignored. This would not be so awful — after all, there are more features that escape the censor's eye — if it weren't for the next move: changing the algorithm in an attempt to bring some "preferred" reality to fruition. Changing data and classifiers so that a different reality is shown instead of the human-unbiased one is another way in which Science dies. No matter how noble and righteous the cause, lying about one's results, forcing results, or changing the algorithms to obtain the results one wants to see is unscientific, fraudulent, unethical, and wrong. Because then we will be lost in the dark, subjected to a select few's vision of what the world should be. And the trope comes marching in, arm raised high; the cliché black boot and its devilish square mustache dictate what else should be true.

The argument against technology often presumes some incredible power for predicting outcomes never seen before, not even in the technology itself: that the use of this particular word, in this algorithm that identifies certain people, will be devastating for that abstract conglomerate of people we fancy calling "a minority", even if they are a whole country. This is not self-evident, and the burden of proof should fall on the accuser, not the accused. Especially when the accused technology produces good outcomes — i.e. predictions above random chance. Especially when it comes down to algorithms that can save real people from unfortunate circumstances.

Ethics at the moment is, then, oppressive double-speak, contradictory double standards, non sequiturs, appeals to fallacy and to feelings, and the justification of actions on the basis of personal experiences and unjustified assumptions instead of evidence or experiment. In sum, it is an intellectual sinkhole peddled by academics who are the philosophical offspring of people like Michel Foucault, Derrida, and Deleuze, and whose intent is revolution towards their personal perfect world. How a strict anti-discord policy, where the status quo governs research, ever got into academia of all places is beyond me; but I guess truth does live and die at the university. I conclude, then, that Ethics is not Scientific.

[1] David Abel, J. MacGlashan, and M. Littman. “Reinforcement Learning as a Framework for Ethical Decision Making”. In: AAAI Workshop: AI, Ethics, and Society. 2016.

[3] Jenna Burrell. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms”. In: Big Data & Society 3.1 (2016).

[5] Walter Sinnott-Armstrong. “Consequentialism”. In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N. Zalta. Summer 2019. Metaphysics Research Lab, Stanford University, 2019.

[6] Shiyang Wei. “A Pilot Study on the Chinese Internet Environment”. In: Education and Management. Ed. by Mark Zhou. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, pp. 617–621. ISBN: 978-3-642-23065-3.
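
[7] Ludwig Wittgenstein. Tractatus Logico-Philosophicus. 1922.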
