The tricky ethics of Google’s Project Nightingale, an effort to learn from millions of health records

Sharing electronic medical records broadly could identify trends as well as mistakes, but it also poses privacy concerns.

The nation’s second-largest health system, Ascension, has agreed to allow the software behemoth Google access to tens of millions of patient records. The partnership, called Project Nightingale, aims to improve how information is used for patient care. Specifically, Ascension and Google are trying to build tools, including artificial intelligence and machine learning, “to make health records more useful, more accessible and more searchable” for doctors.

Ascension did not announce the partnership: The Wall Street Journal first reported it.

Patients and doctors have raised privacy concerns about the plan. Lack of notice to doctors and consent from patients are the primary concerns.

As a public health lawyer, I study the legal and ethical basis for using data to promote public health. Information can be used to identify health threats, understand how diseases spread and decide how to spend resources. But it’s more complicated than that.

The law deals with what can be done with data; this piece focuses on ethics, which asks what should be done.

Beyond Hippocrates

Big-data projects like this one should always be ethically scrutinized. However, data ethics debates are often narrowly focused on consent issues.

In fact, ethical determinations require balancing different, and sometimes competing, ethical principles. Sometimes it might be ethical to collect and use highly sensitive information without getting an individual’s consent.

Public health ethics is useful for evaluating activities that affect population health. A recent report by the World Health Organization (WHO) describes public health ethics with four principles:

Common Good – Does the activity promote collective benefit?
Equity – Does the activity reduce the burdens or risks to health or opportunity?
Respect for Persons – Does the activity support individual rights and interests?
Good Governance – Does the activity have processes for public transparency and accountability?

Public health ethics is an appropriate framework for evaluating Project Nightingale, given its massive scale. But the current health care context is relevant.

The system and its struggles

A doctor at a patient’s bedside could benefit from knowing how other patients in similar situations have fared. That knowledge could be enhanced by sharing data contained in medical records.

For over a decade, scholars have argued that technological solutions are needed to address three major challenges to how the health system uses information.

First, the health system struggles to integrate new knowledge into patient care. New medical evidence takes 17 years to change clinical practice, on average. The breakneck pace of science challenges doctors to keep up. And, applying modern medical knowledge requires doctors to consider more factors than is humanly possible.

Second, information is central to preventing many medical errors, the third leading cause of death in America. Communication problems, judgment errors and incorrect diagnosis or treatment decisions can have devastating consequences for patients.

Third, the system does not learn from care. For example, a doctor and patient might try several different medications before finding the right one. One medication might not help, another might cause awful side effects, and finding the best medication might take months or years. The health system does not learn from that care process. Individual providers will gain knowledge over a lifetime, but that knowledge is never aggregated or shared efficiently.

To help address these challenges, the Institute of Medicine in 2007 introduced a vision for a learning health system that would quickly learn from patient care and use that knowledge to improve future care.

The concept is simple, but learning health systems require sophisticated information technology platforms capable of extracting knowledge from the existing evidence and millions of treatment records.

The benefits of Project Nightingale

Project Nightingale appears to align with the learning health system concept. Systematically improving health care is a clear common good.

Although a learning health system requires sharing patient data, patients stand to benefit from improved health care. Reciprocal data sharing by patients for a collective benefit is a prototypical example of the “common good” principle in public health ethics.

Project Nightingale might also improve health equity. For example, minorities and pregnant women are underrepresented in research studies, raising concerns that some medical knowledge might not be well tailored to these patients. A learning health system would improve understanding of what treatments are effective and safe for these underrepresented populations.

For small-scale activities, respect for persons usually demands giving people an opportunity to make a free and informed decision to participate. However, for activities carried out at the scale of the whole population, it is possible to show respect for persons by engaging the public and inviting them into the decision-making process. It is not clear whether Ascension or Google involved the public or patients in Project Nightingale.

The downsides

Some patients have criticized Project Nightingale because it does not have an “opt-out” for patients who do not want their information shared.

However, opt-out systems raise ethical concerns, too. First, they permit free riders who benefit from the knowledge gained from the participants. Second, the knowledge produced by a learning health system could be biased if enough people opt out. If so, opting out could expose others to riskier health care.

Good governance is critical to support a “common good” activity that conflicts with some individual interests. Transparency and accountability are crucial to keep the parties honest and open to public scrutiny. They also empower people to demand government action against an activity that cannot be ethically justified. There is little, if any, reported evidence that Project Nightingale has sufficient transparency or accountability processes. This is likely to be the biggest ethical challenge to Project Nightingale.

Issues of consent

In most circumstances, patients must sign consent forms before their private medical information can be shared.

Some of the biggest concerns have been about consent. However, public health ethics do not always require consent. One recent WHO ethical guideline says:

“Individuals have an obligation to contribute … when reliable, valid, complete data sets are required and relevant protection is in place. Under these circumstances, informed consent is not ethically required.”

The basic argument is that individuals have a moral obligation to contribute when there is low individual risk and high population benefit.

Currently, the public does not know enough about Project Nightingale to make definitive ethical judgments. However, public health ethics likely provides some support for what Google and Ascension are trying to do. The more critical ethical issue might turn on how Google and Ascension are doing it.


The Conversation

Cason Schmit does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Source: The tricky ethics of Google’s Project Nightingale, an effort to learn from millions of health records

Philosophy of Perception and Biology

by Aaron Spink

The short video presents some interesting facts about the mantis shrimp in a comical way, and it emphasizes how unusual the shrimp’s eyes are compared to those of other animals. This video can be used when discussing philosophy of perception, philosophy of biology, and empiricism.

Find the video here:

What relationship do our bodies have with our conscious experience, and can we even imagine sensing something entirely new? Thomas Nagel asked what it would be like to be a bat. But imagining being a bat would be easy compared to imagining what it would be like to be a mantis shrimp. Students are often convinced that the reason we can see only three primary colors, and the reason they cannot imagine any more than that, is that there cannot be any more than three. However, the mantis shrimp can not only punch faster than a speeding bullet but can also see at least 12 different wavelengths of light—or can it? How can we know?

This clip can be a good introduction to reductive materialism or empiricism. For example, does the mantis shrimp’s number of photoreceptor cells give us any information about how that creature experiences the world? Can we figure out what it is like to be a mantis shrimp from studying its eye?

On the other hand, I often use this clip to introduce empiricist philosophers. This clip would work well for challenging students to think of something they have not yet perceived (i.e. to justify Hume’s copy principle) or to motivate the problem of other minds.

Possible Readings:

  • Nagel, Thomas. “What is it like to be a bat?.” The Philosophical Review 83.4 (1974): 435-450.
  • Hume, David. A Treatise of Human Nature.
  • Berkeley, George. An Essay Toward a New Theory of Vision.

Aaron Spink is currently a senior lecturer at the Ohio State University and works primarily on early modern philosophy.

This section of the Blog of APA is designed to share pedagogical approaches to using humorous video clips for teaching philosophy. Humor, when used appropriately, has empirically been shown to correlate with higher retention rates. If interested in contributing please email William A. B. Parkhurst at

Source: Philosophy of Perception and Biology

White Philosophers: It’s Time to Stop Using Digital Blackface

by Savannah Pearlman

The origins of blackface date from the early 19th century minstrel shows, in which white actors blackened their faces to take up racist, exaggerated and stereotypical black personas. White actors appropriated and distorted the identity performance of black individuals, portraying them as lazy, promiscuous, jolly, or idiotic—all for the enjoyment of (predominantly) white audiences.

While we now denounce blackface as an unfortunate pastime of American history, philosophers have yet to investigate the harms of its 21st century analog: digital blackface.

So, what is digital blackface? And why ought white philosophers, in particular, stop using it?

Like the minstrel shows of old, digital blackface involves a white person’s use of a black face (voice, attitude, or expression), usually a gif (a short, soundless looped video) to add a humorous emphasis to their own reactions.

According to Giphy, a large database of gifs, the most commonly shared gifs online are those of black expressions. Similar to its historical counterpart, digital blackface is meant to express a range of emotions: sass, disgust, surprise, happiness, confidence, dismissal, and confusion, among others.

Here are some paradigmatic examples of gifs that, when posted by white people, may constitute digital blackface:

White people are not alone, of course, in posting gifs of black and brown people and their expressions. When people of color post these gifs, there are also risks: the perpetuation of stereotypes about one group of marginalized people by another, colorism within the black community, and the inadvertent reinforcement of internalized oppression. Still, these are not the focus of this article. Rather, systemic inequality makes white use of these gifs the most egregious case, whose problematic nature is the clearest to explore.

The harms of digital blackface are two-pronged: first within the wrongness of the act itself, and second, within the consequences of posting for public view.

To the first point, the act of a white person co-opting the identity performance of black expression embodies a pointed disrespect. The white person uses the black body (voice, attitude, expression) as a means to an end – for comedic effect.

Just as one (if one is white) ought not don traditional blackface even when home alone, one (if one is white) ought not don digital blackface, even if one’s post is set to private. The act itself is to participate in a historical tradition of demeaning black people, made palatable for general audiences under the guise of entertainment. Mere participation in the act signals complicity at best and an explicit endorsement of racist tradition at worst.

To the second point, digital blackface perpetuates dangerous stereotypes.

The purpose of traditional blackface was to “tame” what whites perceived to be “black threat” (via comedy) by reducing their complexity to one-dimensional characters, whose motivations were easy to understand and even easier to control. The gifs provided as paradigmatic examples are mere snapshots compared to their full-length predecessors, but they participate in the same tactic of reduction. It is precisely their one-dimensional, exaggerated emotion that makes them a desirable extension of the poster’s own emotion.

“The sassy black lady,” one of the most common themes in digital blackface, reaffirms the trope of “the angry black lady.” This stereotype is often deployed against black women to undermine their credibility in situations where they have very real and justifiable anger. By normalizing these tropes as we view reaction gifs again and again, we desensitize ourselves to their damage and enable the conditions for epistemic injustice.

These viral gifs also reaffirm to black people that their likeness and character exist for white use. The white poster dons the black face when it is convenient and enjoyable to do so, without facing any of the personal or systemic injustices that come with living with a black body. Shafiqah Hudson—a critical race scholar—comments in an interview with the Guardian, “It’s superfun to ‘play black’ when you know that you can instantly step back into being non-black, avoiding the stigma, danger and burdens of reduced social capital that real black people often endure.”

This is not to say that every gif of a black face (by a white poster) requires the wrongful deployment of stereotypes, for which the poster is culpable. But there are obvious cases, such as the paradigmatic ones offered above, which make clear that white posters often participate in the stereotypical depictions of black people, with or without embodying racist intent.

While white users may not intend to deploy racist stereotypes in their quest for internet jocularity, failure to intend harm does not undercut the perpetuation of that harm. As Lauren Michele Jackson of Teen Vogue writes: “Digital blackface does not describe intent, but an act — the act of inhabiting a black persona. Employing digital technology to co-opt a perceived cache or black cool, too, involves playacting blackness in a minstrel-like tradition.”

As Rima Basu (2018) identifies, some racist beliefs are motivated by ill-will and some are not. She writes: “although it is tempting to think that the racist suffers from ill will or a deficiency of good will, racial injustice can survive and even thrive in the absence of such negative non-cognitive attitudes. Racism can come in cold varieties as well as hot varieties.”

Thus, given the nature of our field as one of critical inquiry, and given that ethics is one of the pillars of our subject, it seems that we ought to consider what (if anything) has gone wrong, and how we might mitigate the harms of our own digital presence.

I argue that white philosophers ought to defer to the black authors and academics on this issue. Ellen E Jones of the Guardian suggests that digital blackface is yet another outlet for misogynoir (a term coined by black feminist academic Moya Bailey). Lauren Michele Jackson, who authored the seminal article for Teen Vogue, argues that we ought to at least recognize the harm of racism’s proliferation into online space. Jackson is also a faculty member of Northwestern’s African American Studies department, and has written extensively on this topic. Black academics have drawn connections between digital blackface and emphasized how its harms are in some ways analogous to its traditional form. We have a duty to learn from our colleagues and continue this conversation in the philosophical literature.

Our online identity is partly self-created. We construct a persona based on the content we curate for our friends and followers. For many, this identity is an idealized representation of how they want their life to be perceived. For others, it is an outlet for anonymity, fantasy, or even duplicity (as in the case of catfishing).

At the same time, our online identity is also influenced by the platforms on which we post, as well as the content with which we interact. While academic discussion about online testimony and belief transmission is still relatively new, Regina Rini (2017) argues that individuals believe fake news posted on social media sites because these false statements are supported by people they (generally) know and trust. We can build upon this concern as it relates to our present discussion of digital blackface: In the same way that fake news is circulated and re-affirmed within the echo-chamber of our friend network, our friends and followers may be circulating and re-affirming implicit and explicit stereotypes when they view, like, or share gifs. As our online identity intersects with others’, we will—to some extent—reproduce the biases of those with whom we interact.

At this point, a skeptic about the harms of digital blackface might reply that, unlike traditional blackface, gifs do not qualify as an expression of one’s identity, since one is merely pulling from the source material rather than acting it out in one’s own right. I disagree. When people use a gif of a black person (sometimes accompanied by black language, for example, “bye Felicia”), they use such memes to alter their personal voice, expressing their own thoughts through the lens of black culture, black language, and black bodies.

It is true that the analysis of digital performance identity requires a navigation of nuance, which offline performance identity does not. Online identities often obscure the race of the poster, as well as their intent. It is for this reason that traditional blackface strikes us as so obviously bad (where both race and intent are clear), whereas the harms of digital blackface may take time to disentangle and recognize.

So, we have identified a problem. Now what? As the New York Times’ Amanda Hess concludes: “None of this means that white people should only use white people gifs and black people should only use black people gifs, but it does mean that even something as seemingly simple as trying to express happiness on the internet is complicated by structural racism.”

This leads us to my ask: that white philosophers critically reflect on the potential harms before posting such gifs on the internet. Should they determine that harm of the sort described might be done, they ought not post it.

But what of “gray area” gifs, which may be considered relatively neutral in that they do not obviously depict a stereotypical expression or include black language? How might we distinguish between these cases (if we do take them to be benign) and their harmful counterparts?

To answer this question, first consider Jackson’s reporting on Meghan McCain’s (daughter of politician John McCain) pattern of digital blackface. The badness of McCain’s posts is cast into greater relief by her role as an outspoken conservative — the party that is overtly anti-immigration, pro-police militarization, and covertly pro-voter suppression.

Let us harness this contrast with the following rule of thumb. Call this the Meghan McCain test: Let x be a gif of a black person embodying black voice/expression/persona. If it would be intuitively wrong for Meghan McCain to post x, then you (if you too are white) ought not post it.

This rule errs on the side of caution, as the average white person will not be positioned quite so problematically on the axes of oppression. But, if we are looking to minimize unnecessary harm, this rule will provide good guidance by magnifying our intuition that posting x is wrong, and directing us against it.

As a white philosopher myself, I do not have the authority to speak for people of color nor about their lived experiences. Still, I do believe that philosophy of race charges white philosophers to do some of the work of educating other white philosophers about certain harms they themselves may be participating in, and how to off-set or mitigate such harms. (Otherwise, this labor would fall entirely upon philosophers of color, which would result in what Nora Berenstain has termed “epistemic exploitation”).

Elizabeth Williams and I have argued that there is a prima facie duty for people outside of an identity-group in question to defer to those who are within that identity-group regarding harms that they have experienced as members of that group. Therefore, should philosophers of color conclude differently than I on this topic, I would look forward to learning more.

Philosophers of race from Frantz Fanon to Charles Mills have sought to expose the personal, political, and epistemic harms of racism in our systems and societies. The great lengths that these philosophers have traversed in order to bring these issues to the forefront of our discipline cannot be overstated. With the rise of normative epistemology in recent years, a discussion of digital blackface is timely. Certainly there are interesting ties to the epistemic injustice literature, as well as the aesthetics of humor (see Joseph Boskin’s work on complicity). And, should white philosophers continue to resist the conclusion of black academics (after sufficient time and deep reflection), there may be the added connection to Gaile Pohlhaus Jr.’s phenomenon of willful hermeneutical ignorance.

In sum, this article is meant to be a springboard from which we can have a larger discussion about digital blackface in philosophical circles. We know that systemic racism extends to our digital presence, and I have argued that the failure to intend these harms does not mitigate the effects of such harms. Thus, having learned about digital blackface, it is our duty to avoid it.

Acknowledgements: Thank you to my colleagues at Indiana University—Zara Anwarzai, Ricky Mouser, and Elizabeth Williams—as well as Jada Barbry—a local Black Lives Matter activist—for their careful feedback on this topic.

Savannah Pearlman is a Philosophy PhD Candidate at Indiana University – Bloomington. Her research focuses on normative epistemology, centering on the testimony of marginalized people, moral deference, and epistemic injustice.

Source: White Philosophers: It’s Time to Stop Using Digital Blackface

Be Excellent: How Ancient Virtues can Guide our Responses to the Climate Crisis

Written by Roger Crisp

After world leaders and youth activists gathered in September in New York at the United Nations Climate Action Summit, many of us as individuals are left feeling powerless and overwhelmed. Making big personal changes can appear costly in terms of happiness. And anyway, why should I bother when any difference I can make will be negligible? As we contemplate our future, we can seek insight from the great philosophers of the ancient world to guide our choices. Socrates wrote nothing, but many of his ideas found their way into the dialogues of his star pupil, Plato. The main character in these dialogues is indeed Socrates himself. In Plato’s most famous work, the Republic, Socrates suggests that he and those around him discuss the most important question there is: how should one live?

One of his interlocutors rather aggressively suggests that the answer is obvious: you should do anything that gives you money, power, and pleasure, and ignore all the constraints of so-called morality, which is just a con dreamed up by the powerful to keep us in our place.

The conflict between the personal goods that Thrasymachus (a sophist and character in the Republic) lists and virtue – or doing what’s right – is at the heart of the climate crisis. Should I take this highly paid job with an international oil company, or work for a local environmental charity? Shall I buy a new SUV, or donate the money and put up with the inconvenience of public transportation?

Socrates has some snappy responses for Thrasymachus, but they don’t really work, so he starts to develop a more sustained argument based on the idea of ‘the soul’ – that set of qualities that make us the kind of living beings we are.

There are three main aspects to the soul: reason, desire, and spirit (or emotion). Socrates suggests that the best souls will have reason in charge, shaping and directing the desires with the help of the emotions. So though I may want to be sitting right up there in that SUV, feeling powerful and loving the admiring looks I get from passers-by, what I should ask is whether this is what I have strongest reason to do. Sure, driving the big car might be fun but is it right?

Thrasymachus will of course say: ‘Obviously, yes!’. But Socrates compares souls to cities, and he compares the souls of people like Thrasymachus, who follow their desires wherever they lead, to tyrannies.

While it might seem great to be a tyrant, Socrates predicts that tyrants will slide into a frenzied search for ever greater power and pleasure, never tasting true freedom or friendship. His idea is that you are essentially a rational being, and if you are led by your desires then you are not actively living a life at all. Politicians, take note.

The apparent conflict between personal goods and virtue, then, is only apparent, since you will not be truly happy without virtue. And there does seem to be some support for the Socratic view from modern ‘positive psychology’. This relatively new field suggests that many people who care about others and commit themselves to moral goals report higher well-being and a greater sense of meaning and purpose. Having this sense of life’s purpose has even been associated with a lower risk of death.

But at a time when change is needed, how do I decide exactly what the virtuous thing to do is? Here we can learn from Plato’s most famous pupil, Aristotle. Aristotle sees that human life consists in various spheres, and that each sphere can often be characterised in terms of a feeling or an action. Take anger. To live virtuously, you have to feel anger in the right way – at the right time, for the right reasons, towards the right people and so on. And there are two ways you can go wrong: by feeling anger when or as you should not, or by failing to feel anger when you should. The moral life, then, consists in our attempt to hit the sweet spot between the two ways that anger can go wrong.

The analysis works in the same way with actions. Generosity has to do with giving away money, and that should be done in the right way. If you give too little, you are stingy; if you give too much, you are wasteful.

In terms of our climate, this relates to the release of CO2. So the central environmental virtue is eco-sensitivity. Someone who releases carbon in the wrong way (leaving their engine running while they visit the shops, for example) is eco-insensitive, and there might also be cases of eco-oversensitivity (refusing to call an ambulance for someone who needs urgent help, say, in order to avoid the emissions).

Aristotle notes that we are often more attracted to one particular vice, and here that is of course eco-insensitivity. What we should do, then, is reflect on our own practice. If we think we are tending towards that vice, we should try to come closer to the mean. We don’t have to aim at perfection. We can take one step at a time in the right direction. And – if Socrates is right – we will find that we have not sacrificed much, if anything, of our happiness as we do so.

Since ancient times, other virtues have been added to the list, and hope is another important virtue in facing the environmental crisis.

The ancients did not really consider cases in which a group of people are doing something terrible, though each individual’s contribution makes no perceptible difference. One of the central environmental virtues here is collaborativeness, which leads us in the direction of political as well as personal action.

We are also more aware than the ancients were of the moral significance of allowing things to happen, as opposed to actually doing something about them. We need to take ‘negative responsibility’, not just stand by and watch as our planet is damaged.

Virtuous action is needed, from the individual up. Though modernity has made its own advances in ethics, by looking to the ideas of the ancient philosophers we can recognize that, at least as far as the climate is concerned, morality requires only small shifts in each moment rather than the seismic shifts that many might fear.

This blog is based on my Simone Weil lecture, presented in September 2019 at the Australian Catholic University in Brisbane and Melbourne. A version of the blog has appeared on The Conversation, and a fuller text of the lecture is available at ABC, Religion and Ethics. There is a discussion of the lecture on ABC’s The Philosopher’s Zone.

Source: Be Excellent: How Ancient Virtues can Guide our Responses to the Climate Crisis

Is crip the new queer?

I came across Robert McRuer’s ‘crip’ theory in 2015. At the time, I was preparing to write up my experience of touring India with a dramatic monologue that I had written, in verse, on life with my disabled son, Nihal, who died in 2001, and to dramatise our struggles for inclusion. The American theorist’s bold attempt to reclaim this highly charged word ‘crip’ felt at once startling and provocative. Disability politics had always felt challenging to me, never fitting comfortably with my experience: while my son was alive, I’d tended to stay clear of it.

Of course, the first step towards winding back long-held prejudices against disabled people would be to promote positive images of them, to uncover the history of their contributions to society and culture, and to re-assert their universal human right to exist: any means, in short, to help them become more visible. In 2010, the political activist and trade unionist Richard Rieser set up UK Disability History Month to do exactly that, on the model of black and LGBT movements. But this is where the similarity with other marginalised groups ends, or ought to. Whereas one’s sex, race or sexual orientation become defined as ‘handicaps’ under systems of patriarchy, racism or heterosexism, impairment is a ‘handicap’ not merely because of disablism, but because it compromises the individuals who bear it, causing pain and suffering. This is irrespective of how far society evolves in its attitudes, or how far technology is effective in closing that gap.

This is not to promote the suffering-helpless-victims-deserving-of-charity narrative. That narrative is something the disability movement has long rejected in displacing the medical model of disability in favour of a social model, so that instead of seeing disability as an individual problem to be ‘cured’ and ‘treated’, the problem is recast in terms of how society and the physical environment have been structured. Thus, if wheelchair users cannot attend a meeting, it’s not because they are in wheelchairs but because no ramps have been provided.

The social model of disability emerged from the deliberations of the Union of the Physically Impaired Against Segregation (UPIAS). Pointedly, it distinguishes between disability and impairment: the first imposed by society as a form of exclusion, the second characterised by physical limitations. Since the UPIAS was mostly made up of individuals with spinal injuries, no attention was paid to mental health issues or learning difficulties. Still, as the disability activist Tom Shakespeare points out, the social model helped to improve self-esteem and build a positive collective identity in disabled people, while ‘in traditional accounts of disability, people with impairments feel that they are at fault.’

Coming up with a workable definition of impairment has itself been (and continues to be) a hot topic. Since the UK government has to arbitrate on matters of disability, perhaps it is best to quote their definition: ‘You’re disabled under the Equality Act 2010 if you have a physical or mental impairment that has a “substantial” and “long-term” adverse effect on your ability “to carry out normal day-to-day activities”.’ But the difficulties of categorisation are implicit in the long list of conditions. As Vic Finkelstein, a disability-rights activist who developed the social model of disability, put it in a paper he presented at an academic conference in 2001: ‘What was paramount was our focus on the need to change the disabling society rather than make us fit for society.’

Yet what about being fit for your own sake, as well as fit for society? In my son’s case we campaigned on both fronts. We battled for schools to admit him, which entailed their doing a lot more to become accessible than simply providing ramps; they had to formulate and implement policies of inclusion and jumpstart a clear change in attitudes. At the same time, on the advice of doctors, we experimented with Botox to make his eating more efficient and his muscles less stiff and painful. Nihal underwent operations to prevent his hips becoming dislocated; he wore a variety of splints and braces. In his case, the line between chasing a ‘cure’ and improving his daily wellbeing was often blurred.

Now here was McRuer attempting to go further than the social model of disability by standing in permanent challenge to the assumptions of able-bodiedness: using a demeaning word for disabled people, ‘crip’, not cleaning it up, but holding it up as a mirror to the inadequacies of the notion of able-bodiedness. Riding on the coat-tails of queer theory to share its radical stardust, McRuer argues that, as norms, heterosexuality and able-bodiedness are invisible non-identities, and that queerness and disability can become desirable as visible identities of resistance. But do resistant identities such as ‘crip’ and ‘queer’ limit or expand the potential of the politics of transgression? More: does the act of transgression go far enough to change the material realities of these groups? While a politics that draws its sustenance from identity might initially be empowering, does it take you down a cul-de-sac or along wide, open roads?

At one level, the word ‘crip’ can be used the way I might use ‘wog’, to describe myself and shock outsiders who gasp at the political incorrectness of it. That they could never use the term, or challenge my badge-wearing use of it, gives me a certain degree of perverse pleasure. But I have no desire to reclaim ‘wog’ because the work required to transform racist institutions feels more pressing than reclaiming language, which in any case shakes off its baggage of prejudice and hate once society moves on.

I am grateful to Deborah Cameron, professor of language and communication at the University of Oxford, for pointing out that there are two kinds of reclamation: the kind that makes ‘a currently negative and offensive word neutral in all contexts and for all language users’; and the kind that involves ‘taking a negative and offensive word and using it with a positive meaning, as a symbol of identity and badge of radical in-group membership’. The first category includes descriptors such as ‘black’ and ‘woman’ – both previously pejorative terms that have acquired preferential status. The second includes words such as ‘crip’ and ‘queer’ that would lose all their transgressive power if defanged. ‘Black’ and ‘woman’ have become descriptions whereas ‘queer’ and ‘crip’ are provocations. Their proponents wish them to remain so.

Political movements use words such as ‘crip’ or ‘queer’ to re-examine society’s deeply problematic relationship with the ‘other’. This then becomes a politics of transgression rooted in identity. As Cameron explains (via an email interview), it’s transgressive ‘because the meaning of the gesture actually depends on the word continuing to be negative for normies, which allows it to retain some shock value: the in-group meaning riffs off the mainstream negative meaning, and this kind of reclaiming usually entails that only in-group members and close allies can use the word in question without giving offence.’

In order to remain transgressive, these identities must embrace marginalisation

In the gay community, there are pockets of resistance to the use of ‘queer’ as an identity, although it is hard to estimate the size of it. Cameron reminds us that the movement to reclaim ‘queer’ has been going on for 30 years, and wonders whether ‘its older use as a homophobic slur will ever be completely superseded’. Below-the-line discussions on the use of ‘queer’ reveal varied and inevitably heated views. Some find it a harmful term because of its long history of abuse against gay men, although the general consensus is that it is ‘important to have a word that encompasses all non-straight, non-cisgender identities. So far, the word that’s caught on is “queer”. It is clearly not favoured by everyone, but no one who objects to it … suggested a similar all-encompassing term.’ It is precisely this ‘all-encompassing’ aspect of ‘queer’ that led the journalist Jenna Wortham in The New York Times Magazine to ask: ‘When Everyone Can Be ‘Queer’, Is Anyone?’ (2016).

Wortham argued that ‘“queer” has come to serve as a linguistic catch-all for this broadening spectrum of identities’, so much so that it can be claimed by anyone identifying as non-heteronormative. She has a point. Facebook users can now write their own gender description, but as many as 71 gender and sexuality options have been counted in the past. A coinage currently doing the rounds is ‘sapiosexual’, meaning that the person is attracted to intelligence before gender. Queer politics has evolved almost unrecognisably since the Queer Nation Manifesto was handed out at the Pride march in New York City in 1990, addressed to ‘brothers and sisters’ – a gender binary that today would seem inconsistent with a ‘queer’ identity.

Hugh Ryan, founder of the Pop-Up Museum of Queer History, worries that the catch-all phenomenon that Wortham identifies effectively depoliticises the word. As he put it in the article ‘Why Everyone Can’t Be Queer’ (2016) for Slate magazine, ‘Queer is the battle cry of deviancy,’ and, therefore, by definition exclusive. He goes on to say: ‘When we remove the focus on stigmatisation from the word queer, we evacuate it of the only thing that made it a coherent identity in the first place.’ According to Ryan, there might come a time when gay men and lesbians will no longer be considered deviant and would, therefore, no longer be queer. In Crip Theory (2006), McRuer, quoting the American theorist David Halperin, likewise argues that the coherence of the term ‘crip’ relies on its ‘positionality’, the way it functions ‘oppositionally and relationally … as a resistance to the norm’ (ironically setting up the very binary that both theories disdain). This contestation and tension is central to retaining the political power of the word. In order to remain transgressive, these identities must embrace marginalisation.

Of course, newly politicised movements often struggle with the conflicts inherent between marginalisation and mainstreaming. There is a sense of strength in numbers: the larger the group that is disempowered, the stronger the argument for rights; the more significance they have as a vote bank, and the more muscle they can exercise as a consumer group. Indeed, there are commentators who take the line that ‘we’re all disabled’, because the perfect body is a myth; because, over time, ageing is likely to disable most of us, and because the line between illness and disability can be blurred, often deliberately so, by some disability activists. However, an argument that promotes acceptance on the basis of universality must then relinquish its status and pride in being ‘other’.

Of the two terms, ‘queer’ has the longer history, having first been adopted in the 1970s; ‘crip’ theory came along in the 1990s. Queer has been the more theorised and given the stamp of respectability (although respectability is death to transgressive identities) by heavyweights such as the American philosopher and gender theorist Judith Butler. McRuer extends Butler’s argument about the performative character of heterosexuality (the norm that requires constant repetition to shore up its unstable hegemony), to argue that able-bodiedness is also a performance. There is some validity to his notion of ‘compulsory able-bodiedness’ – a term he adapts from Adrienne Rich’s critique of ‘compulsory heterosexuality’ – ‘through which lesbian experience is perceived on a scale ranging from deviant to abhorrent, or simply rendered invisible’.

For McRuer, compulsory able-bodiedness ‘repeatedly demands that people with disabilities embody for others an affirmative answer to the unspoken question, Yes, but in the end, wouldn’t you rather be more like me?’ This might explain why a deaf person chooses to wear an almost invisible hearing aid because it enables them to pass as hearing. It is an attempt to perform able-bodiedness. For a range of disabilities, however, it is not possible to pass or perform able-bodiedness, regardless of societal compulsion.

McRuer complains that most people, primed by the rise of queer movements, now agree that ‘ways of being heterosexual are culturally produced and culturally variable’, but they will not extend that same understanding to able-bodiedness. I am one of them. He attacks Norah Vincent, a lesbian journalist, for writing in Salon in 1999: ‘it’s hard to deny that something called normalcy exists. The human body is a machine, after all – one that has evolved functional parts … This is science, not culture.’ Although I might substitute the word ‘normalcy’ with ‘biology’, it’s hard to quibble with the basic truth of Vincent’s statement. McRuer’s attack on her is reminiscent of the attacks by some trans activists on feminists who argue that there are important biological factors that underpin the claim of women as a separate sex.

Disability, unlike the queer demographic, cannot be exploited by or absorbed into corporate capitalism

Although heterosexuality can be seen as a cultural construct, able-bodiedness has a hard material reality: there is a connection between form and function. The fashion to disregard biology has been taken to absurd extremes. What began as a necessary critique of the way that social constructs justified their oppressive power on the basis of biology has led to the argument that everything is a social construct – even sex and an able body. To me, the idea of performative identities also suggests choice, even if ‘chosen’ under social pressure, and, while sexuality could be (at least in part) subject to choice, disability is not (usually). When McRuer decided to ‘come out crip’ at an AIDS conference, self-identifying as HIV-positive when he was not, he was performing disability, ie adopting an identity not based in the reality of impairment.

McRuer argues that ‘orderly, coherent (or managed) identities’ are the essence of neoliberalism, and that resisting them must be expressed through embracing the disorderly identities of queerness and disability. He argues that neoliberalism demands compulsory able-bodiedness and compulsory heterosexuality. While I agree that disability is a net loss for neoliberalism, this is not true of ‘queer’ and the multiple ‘other’ identities of LGBT+ communities that provide commercial opportunities for new markets. It was, after all, a Conservative government that legalised gay marriage in the UK. As Deborah Cameron told me: ‘I’d say middle-class gay men have been seen as a desirable market for decades, and now LGBT people can marry and have families there’s even more money to be made from anything from gay wedding/anniversary cards to the exploitation of women in the global South as surrogates for gay couples.’ Wortham makes a similar point: ‘Increased acceptance of queerness has only led to increased commodification. Every June, the month of most gay-pride celebrations, companies like Netflix, McDonald’s, Apple, Salesforce and Walmart spend tremendous amounts of money to include their branded floats in the parades.’

Disability cannot be similarly commodified, not least because the equipment needed by disabled people can be so bespoke that there are no economies of scale. It is precisely because disability, unlike the queer demographic, cannot be exploited by or absorbed into corporate capitalism (with the notable exception of some sectors such as mental health, where the rise of antidepressant medication has meant huge profits for Big Pharma) that McRuer believes that ‘cripping that future’ – that is, imagining a world in which disability is both ‘possible and desirable’ – might be a useful political strategy to oppose corporate capitalism and globalisation.

Coalescing around a particular group identity can feel liberating. It attracts attention to your cause, and ensures that you’re not an adjunct to someone else’s. It validates your subversion of mainstream narratives, and authenticates your point of view. Speaking on your own behalf can be deeply empowering, just as belonging to a community of people making the same demands is hugely uplifting. The slogan ‘nothing about us without us’ became widely adopted by the disability movement. Southall Black Sisters (SBS), an activist and support group in West London for women escaping violence, has been my political home for 30 years. Our internal debates have fine-tuned my political understandings and given me the confidence that comes from knowing that I am not alone in my particular critique of society.

Along with agitating for better representation, the drive to acknowledge social diversity and to campaign for the equality of disempowered minorities are important political aims. The first Disability Pride Day was celebrated in Boston in 1990, with disability parades held across the US since then. In the UK, the first Disability Pride march was in Brighton in 2016. For many disabled activists, choosing to parade their disability rather than hide it allows them to counter the shame they’ve been made to feel and the discrimination they’ve faced. Today, the neurodiversity movement is thriving, with many activists on the autistic spectrum positively identifying as ‘Aspies’. At the extreme end of re-framing one’s identity, Zainab Al-Eqabi, an Iraqi woman with an artificial leg, delivered a TEDx talk in 2014 in which she said: ‘I have a huge love for my disability … It is the best thing that has ever happened to me.’

But not all disability activists are comfortable embracing this identity as a political statement. Tom Shakespeare wrote in 2010:

it is harder to celebrate disability than it is to celebrate Blackness, or Gay Pride, or being a woman. ‘Disability pride’ is problematic, because disability is difficult to recuperate as a concept, as it refers either to limitation and incapacity, or else to oppression and exclusion, or else to both dimensions.

Here we come up against the limitations of identity politics: to wit, sharing an experience is no guarantee of coming to the same political demands.

This is exactly the problem with reading ‘crip’ through ‘queerness’. It obscures important differences. Attempting to bolster your own political case by drawing parallels with other minorities can become untenable because different histories of discrimination and different material realities are in play. There was the case in 2002 of the deaf lesbian couple in the US who chose a deaf donor in order to have a deaf child because, as they told The Washington Post, they saw deafness as a cultural identity, not a disability. Defending their decision to choose a deaf donor, they drew on an analogy with race, positing a hypothetical couple’s right to choose a black donor, regardless of the difficulties that child might face in a racist society. In my view, that is not a comparison that holds water. A deaf child will always face significant challenges in being part of mainstream culture, apart from missing out on the sheer joy of music or birdsong, while a far less racist society is something that we can strive to achieve. It also suggests that hearing children will reject their parents’ culture – a fear that could stop migrants from having children – but I have seen such children happily negotiating both worlds.

Some disability scholars, such as Kirstin Marie Bone of the University of Alabama, have also criticised crip theory for positioning itself as a subset of queer politics, thereby further silencing disabled voices. The same might be said about the erasure of lesbians, as a category, under the broader label of ‘queer’. Even lesbians who identify as queer because trend-conscious activism or internalised homophobia has led them to see lesbians as ‘unfashionable, uncultured homebodies’ complain that ‘in [Washington] DC, as in most places, queer parties that get labelled without a gender often default to gay men’, as Christina Cauterucci wrote in Slate in 2016.

Queer activists and scholars often fail to support Asian women’s internal battles against religious conservatism

Crip defaults to physical disability in just the same way. People with physical disabilities have been ‘at the top of the disability hierarchy’, according to activists such as Mark Sherry, a sociologist at the University of Toledo in Ohio. They’ve been the main drivers of change – which accounts for the wheelchair becoming the universal symbol of accessibility. Nor does ‘crip’ tackle the vulnerability of disabled women to sexual abuse or violence, or the problems faced by black, Asian and minority-ethnic disabled people in relation to immigration policies, police racism and the like. That’s why Sherry believes that most disabled people reject ‘crip’: because it ‘masks enormous embodied, classed, gendered, sexualised, racialised privilege’ and doesn’t speak to their lived reality.

There is a similar disconnect between ‘queer’ and the communities that lay claim to the label by virtue of not being white, middle-class, heteronormative cis-men. This bête noire draws all the fire, while regressive trends and power imbalances among disempowered minorities go unchallenged. It explains why queer activists and scholars often fail to support Asian women’s internal battles against religious conservatism: because ‘Muslims’ are a besieged minority.

Earlier this year, the Parkfield Community School in Birmingham faced months of noisy protests outside its gates from Muslim parents opposed to the teaching of an LGBT equality agenda. The protests were orchestrated by individuals involved with StopRSE, a campaign set up to oppose the relationship education due to become compulsory in UK primary schools and the sex education due to become compulsory in secondary schools in 2020. This June, SBS attended a meeting organised to express solidarity with the embattled school head. Yet solidarity failed to arrive from those quarters where it might be most expected. A letter from LGBT+ supporters published in the Independent newspaper in September argued that they could not support the school in question – despite supporting relationship and sex-education (RSE) teaching in general – because ‘the wider embrace of LGBT+-inclusive RSE as the poster-child for the implementation of “Fundamental British Values” suggests a colonial “civilising” attitude towards Muslim communities, and contributes to a harmful and inaccurate stereotype of an uncivilised and intolerant Muslim culture’. The letter failed to notice that the protests against sex-ed teaching were orchestrated by Islamic fundamentalists using those very Muslim parents to further their own deeply regressive agenda.

A politics of transgression, rooted in identity politics, with virtually no reference to class, no real analysis of power, and no attempt to differentiate between discrimination and exploitation is bound to be limiting. Besides, collapsing radical politics into a struggle for self-identification is a form of rebellion that slips easily into the folds of neoliberalism. The flattening effect is succinctly illustrated by Amandla Stenberg, the actor from The Hunger Games (2012) who came out as bisexual in 2016, telling reporters: ‘I definitely believe in the concept of rebellion through selfhood, and rebellion through embracing your true identity, no matter what you’re being told.’ Rebellion here is reduced to reshaping your individual self as an act of transgression, rather than looking at transformation of institutional structures.

In her essay ‘Globalising Capitalism and the Rise of Identity Politics’ (1995), the sociologist Frances Fox Piven at the City University of New York goes directly to the central flaw in identity politics:

Class politics, at least in principle, promotes vertical cleavages, mobilising people around axes which broadly correspond to hierarchies of power, and which promote challenges to these hierarchies. By contrast, identity politics fosters lateral cleavages which are unlikely to reflect fundamental conflicts over societal power and resources and, indeed, may seal popular allegiance to the ruling classes that exploit them.

Even where there is an attempt to open up the cul-de-sac of identity politics through intersectionality or fluidity of borders, the reference point remains ‘identity’.

Solutions for solidarity politics don’t have to be found within the narrow confines of identity. If outsider clout is what queer and crip activists are seeking, the question that follows is whether they are putting this clout to any transformative effect. In her essay ‘Trapped Behind the Glass’ (2017), Bone would argue not. Since crip theory first emerged, she says, ‘there have been few major legal advances since the Americans with Disabilities Act in 1990 – the most significant being the Patient Protection and Affordable Care Act 2010, which protected those with pre-existing conditions and removed healthcare caps.’ Back in the 1960s, when women won the right to abortion in the UK, it was seen as transgressive, as a threat to family life, and it led to a change in women’s material reality. Today, such a campaign would likely get bogged down in discussions of whether or not it was trans-exclusionary.

Identity politics might be the starting point of radical politics but it must not be the end point. It can be the trigger but not the goal. Much of queer and crip theory insists on its marginal status as part of its subversive appeal. But subversion to what end, if not to change the system? Therein lies the fatal contradiction: any reduction in inequality (and total equality is possible only with system change) and any increase in social inclusion would undermine the marginal status to which these theorists are so wedded.

The politics of transgression uses shock to question our values and beliefs. But because it registers low on the Richter scale of societal transformation, it effects minor damage that requires redecoration, not the landslide change that demands complete rebuilding.

Source: Is crip the new queer?

Cross Post: Is Virtue Signalling a Perversion of Morality?

Written by Neil Levy

Originally published in Aeon Magazine

People engage in moral talk all the time. When they make moral claims in public, one common response is to dismiss them as virtue signallers. Twitter is full of these accusations: the actress Jameela Jamil is a ‘pathetic virtue-signalling twerp’, according to the journalist Piers Morgan; climate activists are virtue signallers, according to the conservative Manhattan Institute for Policy Research; vegetarianism is virtue signalling, according to the author Bjorn Lomborg (as these examples illustrate, the accusation seems more common from the Right than the Left).

Accusing someone of virtue signalling is to accuse them of a kind of hypocrisy. The accused person claims to be deeply concerned about some moral issue but their main concern is – so the argument goes – with themselves. They’re not really concerned with changing minds, let alone with changing the world, but with displaying themselves in the best light possible. As the journalist James Bartholomew (who claimed in 2015 to have invented the phrase, but didn’t) puts it in The Spectator, virtue signalling is driven by ‘vanity and self-aggrandisement’, not concern with others.

Ironically, accusing others of virtue signalling might itself constitute virtue signalling – just signalling to a different audience. Whether it should be counted as virtue signalling or not, the accusation does exactly what it accuses others of: it moves the focus from the target of the moral claim to the person making it. It can therefore be used to avoid addressing the moral claim made.

Here, though, I want to consider a different issue. In the only full treatment of the topic in the academic literature (that I know of), the philosophers Justin Tosi and Brandon Warmke accuse the ‘moral grandstander’ (their term for the virtue signaller) of perverting the function of public moral discourse. According to them, ‘the core, primary function that justifies the practice’ of such public moral discourse is ‘to improve people’s moral beliefs, or to spur moral improvement in the world’. Public moral talk aims to get others to see a moral problem they hadn’t noticed before, and/or to do something about it. But, instead, virtue signallers display themselves, taking the focus away from the moral problem. Since we often spot virtue signalling for what it is, the effect is to cause cynicism in the audience, rather than to induce them to think the signaller is so great. As a result, virtue signalling ‘cheapens’ moral discourse.

But Tosi and Warmke offer no evidence for their claim that the primary, or the justifying, function of moral discourse is improvement in other people’s beliefs or in the world. That’s certainly a function of moral discourse, but it’s not the only one (as they recognise).

Perhaps, in fact, virtue signalling, or something like it, is a core function of moral discourse.

Signalling is very common in nature. The peacock’s tail, for instance, is a signal of evolutionary fitness. It’s what biologists call an honest signal, because it’s hard to fake. It takes a lot of resources to build a tail like that, and the better the signal – the bigger and brighter the tail – the more resources must have been devoted to it. Stotting – a behaviour seen in some animals, involving leaping straight up in the air, with all legs held stiffly – is probably also an honest signal of fitness. The gazelle who stotts vigorously demonstrates to potential predators that it’s going to be hard work to run it down, which might lead the predators to look for easier prey. Humans also engage in signalling: wearing an expensive suit and a Rolex watch is a hard-to-fake signal of wealth and might help to communicate that you’re a suitable trading partner or a desirable mate.

In the cognitive science of religion, it is common to identify two kinds of signals. There are costly signals and credibility-enhancing displays. The peacock’s tail is a costly signal: it takes a lot of energy to build it and drag it around, and it gets in the way when fleeing predators. Credibility-enhancing displays are behaviours that would be costly if they weren’t honest: for example, the animal who ignores a nearby intruder not only communicates to group members its belief that the intruder isn’t dangerous, but does so in a way that certifies the sincerity of the communication because, if the intruder were dangerous, the signalling animal itself would be at risk.

Lots of religious behaviour can be understood as costly and credibility-enhancing signalling. Religions mandate many behaviours that are costly: fasting, tithing, abstinence from sex except in certain contexts, and so on. All of these behaviours are costly not only in everyday terms, but also in evolutionary terms: they reduce opportunities for reproduction, resources for offspring, and so on. Religious activities are also credibility-enhancing displays of religious belief: no one would pay these costs unless they really believed that there was a payoff.

Why, from an evolutionary point of view, would someone signal religious commitment? A likely explanation is that the function is to secure the benefits of cooperation. Cooperation with others is often a risky activity: there is the constant possibility that the other person will free-ride or cheat, making off with the benefits without paying the costs. The more complex the social group, and the easier it is to move between groups, the higher the risks: whereas in small groups we can keep track of who is honest and reliable, in a large group or when interacting with strangers, we can’t rely on reputation.

Signalling helps to overcome the problem. The religious person signals her commitment to a code, at least of cooperating with the ingroup. She signals her virtue. Her signal is, by and large, an honest signal. It is hard to fake, and religious groups can keep track of the reputation of their members if not of everyone else, since the pool is so much smaller. This kind of explanation has been invoked to explain the prominence of Quaker business people in the early years of the industrial revolution. These Quakers trusted one another, in part because involvement with the Society of Friends was an honest signal of willingness to abide by codes of ethics.

Religious signalling is already moral signalling. It is hardly surprising that, as societies secularise, more secular moral claims come to play the same role. Virtue signalling is supposed to be signalling to the ingroup: it shows that we are, by their lights, ‘respectable’ (in Tosi and Warmke’s word). That’s not a perversion of the function of morality; it is moral discourse playing one of its central roles.

If such virtue signalling is a central – and justifying – function of public moral discourse, then the claim that it perverts this discourse is false. What about the hypocrisy claim?

The accusation that virtue signalling is hypocritical might be cashed out in two different ways. We might mean that virtue signallers are really concerned with displaying themselves in the best light – and not with climate change, animal welfare or what have you. That is, we might question their motives. In their recent paper, the management scholars Jillian Jordan and David Rand asked if people would virtue signal when no one was watching. They found that their participants’ responses were sensitive to opportunities for signalling: after a moral violation was committed, the reported degree of moral outrage was reduced when the participants had better opportunities to signal virtue. But the entire experiment was anonymous, so no one could link moral outrage to specific individuals. This suggests that, while virtue signalling is part (but only part) of the explanation for why we feel certain emotions, we nevertheless genuinely feel them, and we don’t express them just because we’re virtue signalling.

The second way of cashing out the hypocrisy accusation is the thought that virtue signallers might actually lack the virtue that they try to display. Dishonest signalling is also widespread in evolution. For instance, some animals mimic the honest signal that others give of being poisonous or venomous – hoverflies that imitate wasps, for example. It’s likely that some human virtue signallers are engaged in dishonest mimicry too. But dishonest signalling is worth engaging in only when there are sufficiently many honest signallers for it to make sense to take such signals into account. While some virtue signallers might be hypocritical, the majority probably are not. So on the whole, virtue signalling has its place in moral discourse, and we shouldn’t be so ready to denigrate it.

Source: Cross Post: Is Virtue Signalling a Perversion of Morality?

Who Decides what is Real?

I look out on the sea of shaved heads and maroon robes. The monks sit on thin mats cross-legged in the plain classroom. The air shimmers with monsoon heat. One young monk puts up his hand. “Is the big bang real?” he asks. I pause before answering. The fact that the universe went through an early, hot and dense phase is supported by a web of observational evidence. On the other hand, the origin is projected to be smaller than a sub-atomic particle, a state of infinite temperature and density that can’t be understood in any physical theory. “I don’t know,” is my honest answer.

For a decade, I’ve been traveling to the Himalayan foothills to teach Buddhist monks (and more recently, nuns) cosmology. “Science for Monks” was started at the direction of His Holiness the Dalai Lama, who was worried that the Buddhist monastic tradition did not include math and modern science. These monks are Tibetans living in exile in India. Most left Tibet as small children and many will never see their families aga…

Source: Who Decides what is Real?

Ep. 230: Bruno Latour on Science, Culture, and Modernity (Part Two)

Continuing on Latour’s We Have Never Been Modern (1993) with guest Lynda Olman.

Latour is challenging the idea of objective truth totally apart from perceivers; so is he an idealist? He claims that he is not; he’s not even a strong social constructionist. We lay out the “Constitution” of modernity that keeps science and politics separate, how this way of thinking about things makes it difficult for us to address issues like climate change, and we get into Latour’s positive account for what should replace this Constitution.

Start with part 1 or get the full, ad-free Citizen Edition. Please support PEL!

End song: “Mono No Aware” by Guy Sigsworth, as discussed on Nakedly Examined Music #109.

Source: Ep. 230: Bruno Latour on Science, Culture, and Modernity (Part Two)

Ways of living

At the start of the first TV episode of Ways of Seeing, John Berger takes a scalpel to Botticelli’s Venus and Mars. The opening beat of the programme is the audio of the incision – the blade’s rough abrasion on canvas – before the soundtrack settles into voiceover. ‘This is the first of four programmes,’ Berger says, ‘in which I want to question some of the assumptions usually made about the tradition of European painting. That tradition which was born about 1400, died about 1900.’

Ways of Seeing first aired on Sunday evenings on BBC2 at the start of 1972. It attracted few initial viewers but, through rebroadcasts and word of mouth, the show gathered steam. By the end of 1972, it had gone viral. People in London and New York argued about Berger’s ideas. When Penguin commissioned a paperback adaptation, the first two print runs sold out in months. Regularly assigned in art schools and introductory art history courses, Berger’s project has never really waned in popularity. That first episode now has close to 1.4 million views on YouTube, and the paperback regularly sits atop Amazon’s Media Studies bestseller list.

For decades, Berger’s name has been shorthand for the series, which has been shorthand for a certain style of combative, materialist art criticism. Often presented as a riposte to Sir Kenneth Clark’s TV series Civilisation (1969) – Berger himself spoke of it as a ‘partial, polemical reply’ – the show attacked Clark’s school of connoisseurship ‘with a razor’. Suave, moneyed, knighted at 35, Clark was the embodiment of the high-cultural mandarin: art existed for the pleasures it afforded those refined enough to feel them. Berger was a self-styled outsider: he had run away from boarding school as a teenager, and left England for France in his 30s. Art was best, he said, when it was born of struggle and inspired belief. At its worst, it was little more than a luxury good. The difference extended to the very mode of aesthetic response – appreciation or critique? This is the significance behind the act of vandalism that opens Ways of Seeing. Viewers soon learn that the painting Berger cut was a facsimile, but the metaphor of the scalpel is plain: to question is to dissect. It is to cut past the scrim of beauty, and reveal more fundamental anatomies: capitalism, colonialism, patriarchy, mimetic desire.

It is a move that has only grown in ubiquity ever since. The feminist art historian Griselda Pollock remembers the ‘moment’ of Berger’s appearance – 1972 – as a primal scene of method. After the show, scholars began to turn away from connoisseurship toward what Pollock has called the ‘analysis of power and the deconstruction of classed, raced, and gendered meanings’. Ways of Seeing became an urtext of critique, a work that captured young imaginations, and changed the way that people saw and understood the world. Close to 50 years on, Pollock’s description still applies to most of the work done by humanities scholars and, more and more, mainstream cultural journalism too. From the arts and culture pages of The Guardian or The New York Times to the latest hot takes on Twitter, what criticism has come to mean is what Berger pioneered. In an age of open media, the implications are vast. If the internet has made all of us critics, that means we are all now foot soldiers in a culture war: self-armed semioticians and practiced deconstructors of political signification.

As is the case with most viral content, nobody expected Ways of Seeing to travel so widely, least of all its authors. Kept to a tight budget, the show was filmed in a rented electrical goods warehouse in Ealing, a west London suburb. Berger worked on his voiceover at his parents’ apartment on Hallam Street, in the imposing shadow of the BBC’s Broadcasting House. After the series aired, the arrangement of the book was anything but streamlined. Berger worked with his creative partners (Mike Dibb, Richard Hollis and Sven Blomberg) in a manner more closely resembling the bricolage of a zine than the strategic making of a bestseller. It was a principled if madcap route to fame, part of a broader revolutionary mood. Later that same year, on receiving the Booker Prize for his novel G. (1972) – a sexual bildungsroman set in prewar Europe – Berger announced on stage that he was sharing half the prize money with the London-based Black Panthers. Of course, fame can be secretly coveted only for the privilege to cast it off afterwards. But the one-two punch of Ways of Seeing and the Booker scandal was decisive. Taken together, they turned Berger into a star.

Like beauty, provocation can hide as much as it reveals. Time brings new colour to old materials, and what makes Ways of Seeing so enduring might not be the same as what made it so electrically influential when it first appeared. We are now more aware of the fissures in the show, in its slight hesitations and indecisions, and in the hedges to what was otherwise such a freight train of an argument. The pictorial tradition of the female nude, Berger argues throughout the second episode, was not a celebration of humanist virtue but a fantasy of the acquisitive ‘male gaze’ (the term was coined a year later by Laura Mulvey). But then, as if in a footnote, he adds a hushed caveat, noting the ‘few exceptional nudes’ that were expressions of the painter’s love. There are similar equivocations at the end of nearly every episode. What of the masters of the tradition? What of its rebels? What of the mystery – beyond the ideology – of art? What of those anonymous works not held in any museum but exchanged between friends and partners? And what of the most modern art form of all – the art that comes to us on a screen?

In retrospect, Ways of Seeing was not only about painting but also television. More specifically, it was about painting-as-seen-on-television, which is to say it was about the transition from one medium to another, one tradition to another, maybe even one epoch to another. In short, it was about the severing of roots. Just after Berger cuts out the head of Venus from the Botticelli, we see her cropped portrait run through an industrial printer, multiplied ad infinitum and set in motion along the circuits of mass exchange. The movement finds its outward echo in the following shot: the silhouette of a television monitor against a blue screen.

From the oil painting to the printing press to the cathode-ray tube of TV: beyond the simple aggression of a razor, the opening of Ways of Seeing presents a filmic reenactment of the argument of Walter Benjamin’s essay ‘The Work of Art in the Age of Mechanical Reproduction’ (1936). (One of the chief legacies of the show was helping to launch Benjamin to the front of the critical canon.) Writing during the terrifying onrush of fascism, Benjamin saw the crisis of European liberalism as, in part, a result of the emergence of new media. The advent of photography, the phonograph and other machines of automated replication had produced a more disturbing change in social consciousness than others had recognised. The CliffsNotes version of the essay focuses on Benjamin’s notion of the aura, the idea that reproduction severs artworks from their anchors in space and time, that facsimiles lack something that originals possess. But this was only half of his argument. Benjamin was just as interested in the entire network of mass mediation (as a replacement of art) and the new, seemingly unanchored artform of film. These, he believed, were part of a broader shift that meant nothing less than ‘the shattering of tradition’ and the ‘liquidation of the value of tradition in the cultural heritage’. As new forms of technological culture replaced the old – and the argument will be familiar to anyone who has paid attention in the past several years to discussion of the internet – civilisation moved into a halfway house of mediation, susceptible to new modes of political adventurism and mass behaviour.

Benjamin’s essential concept of remediation has come to denote the process by which an older medium is represented in, or mimicked by, a newer one (as well as the inverse). The yellow sticky notes on your laptop or the painting app on your phone are common examples. Ways of Seeing was itself one of the most ambitious, self-reflexive projects of remediation of the entire postwar period. Building on André Malraux’s concept of a ‘museum without walls’, Berger built a museum of the airwaves. He presented at an often dizzying pace: Botticelli, Leonardo, van Eyck, Bruegel, Rembrandt, Van Gogh, Caravaggio, Goya, Hals (all in the first episode). Berger was bringing painting into what Raymond Williams called ‘an irresponsible flow of images’ characteristic of television. It was an early harbinger of the waterfall scroll of Instagram or Google Images.

Remediation has been theorised by contemporary scholars in relation to adaptation, translation, perspective, realism, transparency, sampling, recyclage and the user interface. For Berger, it was always connected to something more fundamentally human: the experience of migration. What does it mean to be uprooted, removed from an original source, and placed into new surroundings? And what does such an otherwise intimate experience reveal of the creative-destructive engines of modernity?

Berger’s best essays convey a miraculous gratitude that the world comes into view at all

At the start of the 20th century, a number of Central European critics raised these questions with special force. From the Leftist philosopher Georg Lukács (who spoke of the modern era as one of ‘transcendental homelessness’) and his friends Béla Balázs and Karl Mannheim, to the Heidelberg circle around Max Weber, including Ernst Bloch, to Benjamin, Theodor Adorno and the other members of the Frankfurt School, the generation coming of age amid the crises of fin-de-siècle Europe excelled at feeling (and analysing) the disorienting, everyday effects of capitalist progress: alienation, solitude, fragmentation, a sense of spiritual orphaning. (The style also captured the imaginations of many on the Right, including Martin Heidegger and Mircea Eliade.)

Born a generation later, Berger became perhaps the most important critic to extend their intellectual project into the postwar English-speaking world, and then into the postmodern era of high globalisation. He worked within what might be called a ‘warm current’ of the European Left: an anticapitalist humanism less interested in structural analyses of exploitation (though Ways of Seeing had its dose of structuralism) than in ground-level questions of meaning and experience. In a modern world that Weber described as disenchanted, the qualitative virtues of traditional societies had been replaced by a ‘machine mentality’ whose metrics of self-advancement had to be expressed in numerical terms – money, productivity, efficiency. This was part of a larger desire to reduce all of nature to figures and formulae, eliminating the first-hand power of the senses: the visible and the audible, the palpable and the ineffable.

On a formal level, Berger was obsessed by the arts of sight: drawing, painting, photography, cinema. He often wrote about appearances directly, conjuring small physical presences as few others could: the way that a lizard shimmies as it moves, the warmth of grass in the sun, the ‘red of young eyelids shut tight’. His best essays convey a miraculous gratitude that the world comes into view at all. Berger was anything but pedantic. He was friends with academics, including famous ones, but his style was anathema to the learned and world-weary. The renowned literary critic Frank Kermode once wrote to Berger remembering a stay in his ‘peculiar paradise’ in the Vaucluse in southeastern France, so different from the ‘low morale’ and ‘vanity’ of Cambridge.

Ways of Seeing has had its impact on the discipline of art history – as both grenade and leveller – even as Berger remained uninterested in the kinds of questions that art historians tend to pose. He was drawn instead to far more religious themes: longing and exile, encounter and estrangement, leave-taking and return. His greatest legacy might lie in the unique ways in which he combined these two spheres – the visual and the existential – both of which have their roots in evolutionary biology. (Visual areas account for a large portion of the cortical surface of the human brain, while the prefrontal cortex deals in memory and those cognitive processes that help to found a coherent self.) Berger was one of the few modern writers to have trafficked so regularly between the world of ideas and the world of things. As he later reflected, it was perhaps his early work in television, with its voiceover and film track, that helped him to synthesise his love of both words and images, thinking and seeing.

‘The way in which human perception is organised,’ Benjamin wrote, ‘is conditioned not only by nature but by history.’ For Berger, the changes to visuality in the 20th century must be understood in relation to the qualitative dimensions of its historical watersheds. Close to 20 years after Ways of Seeing, he wrote of the advent of cinema in relation to the experience of exile. He saw cinema and exile as intertwined, part of an intimate dialogue between presence and absence. To film anything is to safeguard it for the future, and so to foresee its eventual loss. It is to watch a set of moments pass into a separate realm both inside and outside of time. ‘In the sky of cinema,’ Berger wrote, ‘people learn what they might have been and discover what belongs to them apart from their single lives.’ The century of film was also a century of transport, emigration, disappearance, uprooting. ‘Painting brings home,’ he concluded. ‘The cinema transports elsewhere.’

That distinction emerges as the heart of Ways of Seeing. As a film about painting, it was the hinge on which the programme was built: between locomotion and stillness, sound and silence, a blue screen and canvas. ‘With the invention of the camera everything changed,’ Berger tells us in the first episode. European painting once gathered the visible world into fixed scenes of static permanence. But film meant ‘we could see things that were not there in front of us’. Appearances entered a state of motion and flux. They began to travel across the world. ‘It was no longer so easy to think of appearances always travelling to a single centre.’

‘A single centre.’ This might be another word for a home – that place, as the poet W H Auden put it in ‘Detective Story’ (1937), ‘where the three or four things/that happen to a man do happen’. For Berger, the need for a home was part of human nature, dating back thousands of years, at least to palaeolithic dwellings and the transition from nomadism to agriculture. In an essay first published as ‘A Home Is not a House’ (1983), curiously prompted by Steven Spielberg’s film ET (1982) and its global popularity, Berger considered more archetypal beginnings. The term ‘home’, he admits, has been long taken over by the moralising of conservatives and xenophobes, both representatives of the ruling class, who have worked to hide its more original meaning. He writes:

Originally home meant the centre of the world – not in a geographical but in an ontological sense … home was the place from which the world could be founded … Without a home at the centre of the real, one was not only shelterless, but also lost in non-being, in unreality. Without a home, everything was fragmentation.

Though expressed in straightforward prose, Berger’s essay slaloms through a conceptual minefield, one that has confused (and intimidated) most thinkers on the Left for at least a century. No other baby has been as perpetually thrown out with the bathwater of politics as has the concept of home – perhaps due to its presumed relation to the ‘national question’ or the desire for property. On each of these scores, Berger drew fundamental distinctions. Along with only a handful of postwar critics, most of whom were refugees, he wanted to acknowledge the atavistic pull that an original home can exert. To long for one is not incipient fascism, but a desire perverted by the ideologies of patriotism and patriarchy.

Though aware of the very real contradictions, Berger would have agreed with Edward Said, who wrote in ‘Reflections on Exile’ (1984) of the ‘unhealable rift forced between a human being and a native place, between the self and a true home: its essential sadness can never be surmounted.’ And yet he would have also agreed with Vilém Flusser, the Czech-Brazilian philosopher, who spoke of the migrant not only as a challenge to the native’s self-centredness but as holding the capacity to enlighten. Flusser, who (like Berger) wrote extensively on both photography and emigration, in ‘The Challenge of the Migrant’ (1985) suggested that the migrant should be seen as a ‘vanguard of the future’, an emissary of a new mystery: not the old mystery of a lost homeland but rather ‘the mystery of living together with others’.

The two groups for whom Berger came to advocate, the Zapatistas and the Palestinians, were both stateless

In Berger’s work, the figure of the foreigner represents promise more than threat. This was true in his first novel, A Painter of Our Time (1958), about a Hungarian émigré in London. It was also true for A Seventh Man (1976), his collaborative account of migrant workers in Europe, and his trilogy of peasant fiction, Into Their Labours (1991). In Flusser’s words, the migrant can be ‘both a window through which those who have been left behind may see the world and the mirror in which they may see themselves, even if in distortion’. Much critical thought has examined those distortions. Said reframes the question, asking how we might ‘surmount the loneliness of exile without falling into the encompassing and thumping language of national pride, collective sentiments, group passions?’ At a political moment that has seen the stunning rise of Donald Trump, Narendra Modi, Jair Bolsonaro, Viktor Orbán – the list goes on – this might be the million-dollar question of our time.

Unlike other social theorists, Berger never tried to reason his way through the contradictions of nation-state or the citizen/non-citizen distinction. He preferred instead to disown any affinity at all with state power. The two groups for whom he came to advocate, the Zapatistas and the Palestinians, were both stateless. Perhaps this was a cop-out – but maybe not. In an otherwise sympathetic review of Berger’s From A to X (2008), Ursula K Le Guin pointed to the absence of political complexity in the novel: the allegorical universalism of its revolutionary lovers effectively ‘exonerated [their people] from bigotry and political folly or factionalism’. The charge of sentimentalism was often levelled against his later work.

In 2007, aged 81, Berger published Hold Everything Dear, about the War on Terror and the global migration crisis. In a phone interview with an Australian radio host, he was asked to directly confront the contradiction that immigrants can put pressure on the native poor, making them ‘nervous and even angry’. Berger drew back. ‘I don’t deny the difficulties,’ he said, but he added that the problems were often distorted by the vested interests of the national press, and by cynicism:

You ask me as though I can find a solution. No, I can’t find a solution in theory like that, of course not. The solutions … we’re not really talking about solutions, we’re talking about finding a way to live, to survive, to perhaps discover forms of mutual aid … All that can only happen in practice, in particular situations in the way that people associate or don’t associate in terms of some small project or in defence of some small thing which is in the area where they live. It’s not for somebody talking on the radio abstractly about the world who will find that kind of solution.

The answer reflected Berger’s distrust of theoretical remedies to human problems. Perhaps even more so, it accorded with his respect for practice and social knowledge. He never tried to gain the ear of power. He was more concerned with everyday gestures and decisions: the choices people either make or fail to make in their own lives.

A choice about a way to live presented itself to Berger shortly after he made Ways of Seeing. He was in his late 40s and had achieved an international level of fame. The invitations started coming in. He could have taken a position at a museum or university. He could have entered a world of sinecures and fellowships, residencies and agents, conferences and airports. He turned down almost all of this.

The reasons were historical as well as personal, and might relate, however indirectly, to our own contemporary impasse: our inability to see more than one generation into the future, the dissolving legitimacy of the metropolitan and academic elite, the seeming incapacity to move beyond a politics of negativity and despair. Just as we are hitting the limits of critique as a culture, Berger was hitting them as a writer – and a person. With Ways of Seeing (and his Booker-winning novel G.), he had reached a tipping point that was also a midlife crisis and a fork in the road. ‘I can be only by destroying,’ Lionel Trilling once wrote of a certain modern attitude, ‘I can know myself only by what I shatter.’ But where is there to go when the demolition is complete?

There is a photograph of Berger from the 1973 Frankfurt Book Fair. Taken by Jean Mohr, a lifelong friend, it shows a middle-aged writer, exhausted and detached, lying on the floor as others walk past him in a blur. What was Berger thinking about? What was he longing for? It was at this fair that Berger met a young American, Beverly Bancroft, then an assistant at Penguin Books. Within a year, they were married. Two years later, they had a son. Soon they moved to a small farming village in the foothills of the Alps. The chalet they rented lacked central heating and running water. The outhouse was across the driveway.

The question, he once said, was of ‘continually learning to be embedded in life’

It would be easy to romanticise Berger’s third act as a rural storyteller. Even while haymaking, he was still a renowned writer with famous friends. But it would be just as easy to cynically write it off. Throughout the neoliberal era, most intellectuals have lived in a social world that is urban, cosmopolitan, cutthroat and status-oriented. Berger went someplace very different. He remained politically committed though his conception of the political shifted and enlarged, absorbing a broader sense of history and experience.

The question, he once said, was of ‘continually learning to be embedded in life’. During the 1970s and ’80s, as Ways of Seeing made the rounds in British and American classrooms, Berger was discovering his own need for roots – what Simone Weil called ‘the most important and least recognised need of the human soul’ – even if they were freely chosen and across the English Channel. Embeddedness, in this way, was about the double anchors of community and place. It required, on the one hand, the help of others – not primarily because of their material aid but because ‘they are real and therefore looking at them, being with them, you become real in that moment’ – but it also required an individual openness to the physicality of the world: the seasons, the rising and setting of the sun, the trees and animals and rain.

How this ontology would map onto urban experience is an open question that Berger never fully answered. How it would map onto digital experience is something we have yet to answer. Yet there is in his late work a kernel of something perhaps visionary. At a time when E M Forster’s humanist mantra – only connect – has come to sound like a slogan for an internet provider, Berger’s more numinous, earthly communions might be the most useful. Ways of Seeing remains the way he came to the attention of millions, and the hinge in his life. His long trajectory after the dividing line of Ways of Seeing still has much, maybe even more, to teach us.

In the conversation with the Australian interviewer, Berger felt compelled, if only for a moment, to leave the sphere of ideas. ‘Now I live here,’ he said of the village where he had settled:

I’m looking out of the window, the sky is grey, it’s got to be about 13 degrees … The hay is getting browner and browner, less and less nutritious, so there will be less and less milk this winter when the cows are fed hay because of the snow outside. So I’m sitting here in front of that window, and now, after all those years, I’m sitting at home …

Source: Ways of living

Authenticity and Normative Authority: Addressing the Agency Dilemma with Values of One’s Own

First published: 01 December 2019

The full text of this article is unavailable due to technical difficulties.

Source: Authenticity and Normative Authority: Addressing the Agency Dilemma with Values of One’s Own