‘AI and the Future of Humanity’: Night Owls Discussion with Chloé Bakalar 

April 23, 2026
Photo/Chloé Bakalar

On Saturday, Apr. 11, Chloé Bakalar — currently AI ethics lead at OpenAI, the developers of ChatGPT — joined Associate Professor of Political Science Jonny Thakkar at a Night Owls event. The pair, with questions from the audience, discussed the implications of generative AI for the future of humanity and the role of ethics in understanding the intersections of technology and society. 

Night Owls is a running series coordinated by Thakkar designed to platform “philosophical conversations about issues of pressing concern to students and faculty,” based on the belief that “the unexamined life is not worth living.” This event, consisting of a discussion between Bakalar and Thakkar followed by audience Q&A, ran half an hour beyond its three-hour schedule. 

Prior to joining OpenAI, Bakalar was the chief ethicist at Meta, the parent company of platforms such as Facebook, Instagram, and WhatsApp, from 2019 to 2025 and an assistant professor of political science at Temple University from 2018 to 2024. 

As a postdoctoral fellow at Princeton University, where she and Thakkar met, Bakalar co-founded the Princeton Dialogues on AI and Ethics, a research collaboration that sought to “create a pipeline between industry and academia.” As part of the program, Bakalar and interdisciplinary scholars engaged directly with tech companies on the ethical real-world implications of their technology on society.

Reflecting on her path to AI ethics, Bakalar noted her initial interests in “how the ways that we communicate and the different kinds of tools or mediums through which we communicate make us better or worse people.” Given the role of technology in communication, Bakalar was led to AI ethics, which considers “social values, political values, institutional values, and … moral values.” 

Meta approached Bakalar for a new AI Ethicist position not long after she joined Temple’s faculty. Though initially quite skeptical because of her own ethical stances, Bakalar was persuaded to tentatively assume the role by the prospect of “a chance to fix [issues she saw] or help [Meta] do better at scale,” in terms of its ethical impact.

In this new role at Meta, Bakalar wanted to challenge the traditional model of AI ethics as primarily a research-only or compliance function by embedding her day-to-day work directly into the engineering team. As a member of the technical team, Bakalar was able to operate closer to those making the programming and technical decisions, which determine the functioning of the AI models. She felt she could have greater impact by being “right on the ground, where [her] interests and [her] incentives were aligned exactly the same as the people who were building the things and … really making the decisions.” 

Given the expansive impacts of generative AI on society, such as on the workforce, online content, and education, Bakalar emphasized the similarities between recent AI advances and other major technological advancements throughout history. Rather than AI causing the “destruction of all humans,” Bakalar is more concerned about its “deterioration of parts of what we consider humanity.”

She noted that past technological innovations have involved similar trade-offs between the loss of skills historically possessed by humans and the advantages of powerful tools. Bakalar cited the example of GPS, which provides easier and more accurate navigation, though at the cost of traditional navigational and astronomical techniques and of connection with one’s surroundings.

Given these “trade-offs that change who we are,” according to Bakalar, it is important to be purposeful and considerate of the potential gains and losses to society when making choices about the innovation and implementation of new technology.  

While generative AI may be useful for accelerating time-consuming processes, allowing individuals to reclaim deeper and more rewarding engagements for themselves, Bakalar raised concerns over the limits of this paradigm. She cautioned that the overall balance of trade-offs that a new technology involves should be understood when making choices about its implementation.

While Bakalar identified other areas where AI can have positive impacts — such as health care and environmental sciences — she expressed concerns about changes to education. For example, students increasingly use AI to complete assignments, and attention spans have decreased across age groups, due in large part to social media. Both are recent trends that generative AI is likely accelerating.

Cautioning against overreliance on AI, Bakalar argued for the importance of “[working] out your brain” to retain skills, particularly for students whose primary focus is on developing new ones. 

In response, Thakkar explored the possibility of a move towards “reclaiming some of these capabilities for ourselves, and maybe not because they’re strictly necessary, but because there is some … intrinsic value in achieving things for ourselves.” Rather than accepting the loss of these figurative cognitive “muscles,” some people may instead place even more emphasis on skills that require attention and concentration, as “a way of being more fully ourselves, or getting the pleasures of developing one’s own capabilities.”

While highlighting the uncertainty surrounding the impacts of AI in the future, given its expansive potential benefits and drawbacks, Bakalar added that in the process of both technological innovation and daily life, “The first question you should always ask is, ‘Why am I doing this?’ The second question with AI should be, ‘Why am I using AI to do this?’” 

Bakalar stressed the importance of ethically informed technological development. She described her work as an AI ethicist as a form of “very applied, practical ethics,” which involves, for any given issue, “looking at both the normative questions … and also what is actually possible, both technologically feasible and feasible in the world in which we live.”

Thakkar posed a question about the apparent lack of corporate accountability to the public; corporations are often perceived as seeking to appear accountable or democratic only for reasons of self-interested financial gain. In response, Bakalar expressed some doubt about attempts to democratize platforms through solicited, and potentially shallow, user input alone. While generative AI is designed to appeal to the perceived desires of consumers, often through user input tests designed to pick up on people’s expressed preferences, Bakalar voiced concern that expressed preferences may not accurately reflect the actual preferences of users.

Responding to Thakkar’s observation that some chatbots lean towards “sycophancy” — excessive, servile flattery — toward users, Bakalar noted the tendency of individuals to push back against overtly flattering responses from chatbots once they realized the flattery was fake.

“One way or another, we’re going to find stuff out about humans,” Thakkar added, pointing out that the observation of human behavior in response to AI “is like a mass experiment.”

In the case of specific platform features, Bakalar complicated the notion that corporate financial incentives might motivate intentionally harmful design, adding that rather than maliciousness, “the amount of randomness that contributes may be more terrifying.” 

Bakalar also emphasized the inability of government regulatory bodies to impose significant constraints on companies: action is slow, fines are relatively low-impact, and large corporations are able to outpace both the implementation of legislation and the litigation surrounding it. In the U.S., Bakalar noted the unwillingness of Congress to pass AI regulation, contributing to a pessimistic view of the likelihood of external institutional constraints.

In the Q&A session, several students raised concerns about ethical data use, as well as the significant resources (such as energy and water) consumed by generative AI and data centers. In response, Bakalar stressed the barriers to slowing down resource-intensive growth given “serious collective-action problems” within the industry. 

Constrained by the current political and economic frameworks, namely a “winner takes all” philosophy and capitalist incentives, Bakalar emphasized that the guiding principle in many choices was the belief that “the technology has to come as quickly as possible, so it’s better to do it fast … and if you don’t do it fast, someone can do it faster, and it might be a lot worse.” 

Thakkar further introduced the prevailing idea of “building in a notion … of human supremacy” in the design of AI models. Building on the concept of the operationalization of ethics in AI, where ethics are made concretely “operational” by building values and principles into the models themselves, Bakalar expressed that “there’s no one way of operational ethics, because ethics instantiates itself in so many different forms.” 

Bakalar cautioned against adopting a single ethic, instead emphasizing a pluralistic approach in which, by including “a lot of different values that the model understands and internalizes … you can teach that there are lots of different ways of engaging in decisions related to those values.”  

Informing this pluralism is Bakalar’s belief that her role is to help others “understand that their perspective on what’s good is not the only perspective on what’s good, and that some things might actually be just bad.” 

Additionally, Bakalar said that fostering individuals’ moral responsibility and critical thinking about their ethical obligations includes the recognition that “non-choices are [still] choices.” Thakkar concluded that, therefore, “to embed ethics means to embed reflection.”

In arguing that “intelligence and morality are [not] particularly well correlated,” Bakalar pushed back against the idea of the possibility of a “supermorality”: while “we can make a super intelligent AI … that does not mean that it will be good at making moral decisions.” 

Bakalar cautioned against “granting moral agency” to AI, which risks “minimizing … human responsibility and agency.” 

“We do better when we go a little bit more slowly,” Bakalar added. This way, “we have the opportunity to check for scary stuff, and we have more of a chance to think about the question of, ‘Why are we doing this?’”

1 Comment

  1. “In the U.S., Bakalar noted the unwillingness of Congress to pass AI regulation, contributing to a pessimistic view of the likelihood for external institutional constraints.”

    I wonder whose employer has been lobbying against regulations! More broadly, I wonder what types of people (CEOs, billionaires) have dismantled the regulatory apparatus in the US and elsewhere. It’s also reductivist to blame this on “Congress” when you have people like Bernard and AOC, who are in Congress (and, coincidentally, not lobbied by OpenAI, etc.), attempting to introduce regulatory legislation (https://www.sanders.senate.gov/press-releases/news-sanders-ocasio-cortez-announce-ai-data-center-moratorium-act/).

    “Bakalar cautioned against adopting a single ethic, instead emphasizing a pluralistic approach in which, by including “a lot of different values that the model understands and internalizes … you can teach that there are lots of different ways of engaging in decisions related to those values.”

    Models don’t understand anything. In theory some type of AI could, but LLMs do not and never will. LLMs do not grasp meaning. They could spit out an argument for why racism is unethical (or perhaps a “pluralist” take that racism is *mostly* unethical), but they do not understand why racism is unethical.

    “In arguing that “intelligence and morality are [not] particularly well correlated,” Bakalar pushed back against the idea of the possibility of a “supermorality”: while “we can make a super intelligent AI … that does not mean that it will be good at making moral decisions.””

    Morality is a product of intelligence. Humans exhibit a capacity for moral reasoning and creating moral structures. Do butterflies? Do sharks? It’s baffling to me to say that intelligence and morality are not particularly well correlated.

    But we can go ahead and get controversial about this. Why not. Among humans, when we think of intellectually rigorous ways of Being and Becoming, and when we think about, say, the educational demographics, conspiratorial beliefs, or credulousness toward an obvious chaotic evil charlatan in the form of Trump of MAGA vs. non-MAGA, can we correlate morality and intelligence? I believe we can. Would this hold for a super intelligent entity? Who knows.

    Not that we can make a super-intelligent AI; we can’t. We cannot even make a normal-intelligent AI. We can make simulacra of intelligence, but even that claim is suspect, as we lack a coherent definition of intelligence. Instead, we have ceded authority on the issue of what qualifies as intelligence to the makers of commercial AI products, and tacitly accept their products’ ability to perform well on a battery of tests as a sign of intelligence. But it is not. It is, at best, a sign that you may not be looking at a great job market when you graduate, as wage-slashing managers can be talked into deploying AI to do tasks previously assigned to introductory-level employees. (I hope I’m wrong about that last point. We’ll see how things develop.)
