Hope, Hype or Horror? ‘AI Doc’ Director Charlie Tyrell Asks What’s Next

I write (and think) about AI for a living. In any given 30-minute period, I vacillate between worrying that AI will destroy everything I know and love, and believing — or at least wanting to believe — that it can change humanity for the better.

Dread turns into hope, which turns into ambivalence, which turns into fear-induced cynicism. Rinse, repeat. Honestly, my central nervous system needs a rest.

That debate is at the heart of a new documentary that hits theaters today, March 27. AI Doc: Or How I Became an Apocaloptimist (104 minutes) premiered at Sundance in January and later screened at SXSW. The film explores the wild industry and mind-melting world of artificial intelligence. It takes a clear-eyed look at the tension between those who feel extremely pessimistic and those who feel extremely optimistic about the AI boom, and at how to make sense of that divide.


The film's two directors, Daniel Roher and Charlie Tyrell, had both just become fathers as the film was being made, their children born a week apart. Through that lens of fatherhood, the documentary draws on hundreds of interviews, both on and off screen, with key technology and risk experts around the world — from OpenAI CEO Sam Altman to Dan Hendrycks, executive director of the Center for AI Safety — to explore whether AI is the biggest threat we've ever faced, the most exciting technology we've ever built, or something else entirely.

Roher won the Academy Award for Best Documentary Feature for Navalny (2022), and Tyrell was on the Oscar shortlist for his short documentary My Dead Dad's Porno Tapes (2018). AI Doc was also produced by the teams behind Everything Everywhere All at Once (Daniel Kwan and Jonathan Wang) and Navalny (Shane Boris and Diane Becker).

I spoke with Tyrell this week, ahead of the theatrical release, to discuss fatherhood, the two and a half years spent making the film, its inspirations and goals, and the future of a society shaped by AI.


The discussion below has been edited for length and clarity.

I know you've made documentaries before, but how did you prepare yourself, going from a short personal documentary to a documentary like this, tackling something much bigger and more impactful, like AI?
Tyrell: I mean, there was no preparation. Daniel Roher is the one who brought me onto this film, and I don't remember how many features he had done before this, but more than me. And it was just trusting each other. And not just Daniel, but the whole team, facing it together and kind of saying, "We don't need a plan; we'll make the plan as we go." And not necessarily being brave about it, but just knowing we had a job to do and a goal, and continuing to push forward toward it.

So how did I navigate it? Just by trusting the people around me. Coming from short films right before this, I was still trying to bring my own emotions and point of view to this story. And it was after I became a father — and I became a father the same week Daniel did. So a lot of his feelings were my feelings, and vice versa.

I was really touched by the lens of fatherhood. It was tender, and it surprised me a bit. Was that a natural process, or did you and Daniel know going in that it was going to be a framing device?
Tyrell: It happened naturally, but also early in the process. I think it was in our first or second group meeting with Dan Kwan and Jonathan Wang and Shane Boris that it was floated as an idea for how we could do this. And we started kind of entertaining it right out of the gate.

And you said Daniel was the one who brought you on. Do you think your soon-to-be shared fatherhood was part of that?
Tyrell: Absolutely. I don't remember if this project came up before or after we knew each other's kids were around the corner. But definitely. I lean toward humility, and I believe Daniel does, too. So it was nice to have a companion when you knew you were going to go through something like a behemoth of a feature film, tackling a behemoth of a topic like AI. And knowing, "OK, I'm going to go through this one big thing in my life, having a baby," and, "OK, someone else is going to share a little bit of what's going on." I was relieved to know that.

Yes, you're afraid of, "How will I be able to navigate my work with a child?" And just knowing I wouldn't have to do it alone gave me a sense of security. And actually, my kid is in the movie a few times. There are subtle frames and moments in there.

In an interview with CBS, you said the goal was to democratize AI. Who do you think is really benefiting from current AI developments, and who is being left out?
Tyrell: Well, among the first to benefit is the technology industry, and the valuations these companies reach at, in some cases, absurd, unheard-of prices. It makes many people very rich, and it makes many people very powerful. So that's one of the first beneficiaries.

Then there are people who don't gain anything. When it comes to data centers, people living near them lose some of their essential resources, like water. Some people are displaced from their homes because of these data centers. And I'm mainly talking about the Western world, North America and the United States specifically. It is tricky, and surprising sometimes, to follow this technology to its ends … There are places in the world where people are looking at screens, uploading and downloading data [to train AI], and some of it is horrible to look at. There are still people combing through these [data sets] and being exposed, in some cases, to really awful things — and not being paid very well to do it.

Was there a particular idea that stood out to you during the making of this documentary? Is there one person who said something that really stuck with you?
Tyrell: The film, including the experience of making it, was really a chorus of voices. But the one that really stood out to me, off the cuff, was Deb Raji [a computer scientist and researcher at UC Berkeley, specializing in algorithmic auditing]. She was really able to talk about the ways this technology is being used, at the speed that it is, regardless of the goals it might have. Right now, today, there are people becoming victims of technical errors. There are people who end up spending a weekend in jail because AI-powered facial recognition software misidentifies someone and confuses them with the person who committed the crime.

As this technology is used in things like mortgages and loans and the kind of bureaucratic stuff people need to live, it has to work, and work well, because their lives and their health and their stability depend on it. These programs aren't people with anything like compassion. They're binary systems that ultimately output a yes or a no, without much room for appeal, because we treat their output as data and absolute truth. So people are affected by that.

Daniel did the interview [with Deb Raji], and I was right there as a viewer, but I was really struck by a lot of what she had to say because it took me out of the kind of bubble I live in. And one thing she says is that if you feel the negative effects of this technology won't affect you because of your place in life or your privilege, it's only a matter of time. Because it just keeps growing.

I felt very seen at times during this documentary, because every day I'll scroll through headlines like, "AI is going to destroy everything." And then I'll think, "No, it's going to be fine. We're all going to be fine. Humanity has gone through significant shifts before, and we've done OK." Were there times when your view of AI flipped back and forth? How often did that happen?
Tyrell: The whole time, and it continues to this day. And that's the truth of this technology: It's both things at the same time. One of the messages of the movie is that it will have amazing capabilities, and it will have terrible capabilities. And to engage with it, we need to accept and understand that that's what it's going to be. We can't believe it will be only good, or only bad, because it will always be both.

Was there a target audience for this? Because I live and breathe AI and I think about it all day, every day, but I loved this documentary, and it taught me things. Did you make it so that this would be more for people who have a vague idea of what AI is, or was it for everyone?
Tyrell: What we were striving for here was an entry point, a first date with the technology. And by that I mean the audience was people who probably weren't interested in, or willing to engage with, this technology or this landscape — people who were probably content to ignore it. We wanted to make an interesting film that would be engaging but also educational. It's a very daunting topic. I personally find that when I'm overwhelmed with information, I sometimes want to shut down and look away. Like, do I really need another problem to deal with in my life? That's normal human nature for most people.

We wanted to make the film accessible and, in a way, have it be a starting point for many people, a starting point for conversation. And by that I don't mean we're dumbing any of it down or being overly simplistic, but it's meant for a general audience. It's designed to meet most people where they are when it comes to this technology.

Are there questions about AI that you wish more people would ask?
Tyrell: As for the people who use it, I hope there will be more awareness of the power consumed to create a silly image of yourself in a different context and setting. I wish there were more transparency or metrics on: "To make this image, this is the amount of water you used, or this is the energy you used." And when people see that, maybe they'll still try to find the perfect image of themselves as a centaur or something, but maybe instead of making 50 attempts to find the right one, they'll cap it at a couple. That's something I'd like to see baked into some of these models' interfaces.


