Running to the Noise, Episode 23
Artificial Intelligence: Tom Dietterich '77 on the Promise and Pitfalls of Machines that Learn
Artificial intelligence has evolved from an abstract concept into one of the most transformative forces of our time. When Tom Dietterich graduated from 91直播 in 1977 with a degree in mathematics, AI was still largely theoretical. Over the decades that followed, his pioneering research helped turn theory into reality.
A distinguished computer scientist and one of the early architects of machine learning, Dietterich's work laid the groundwork for the algorithms that now drive everything from voice assistants and climate models to medical diagnostics and drug development. Tom's work has made him a sought-after authority. He advises the U.S. government on AI technologies and has earned some of the field's top honors, including the Award for Research Excellence from the International Joint Conference on Artificial Intelligence—a career achievement shared by just 25 scientists since 1985.
In this episode of Running to the Noise, 91直播 College President Carmen Twillie Ambar sits down with Tom Dietterich, Distinguished Professor Emeritus at Oregon State University, to explore both the promise and the pitfalls of artificial intelligence. Together, they trace the evolution of AI from its beginnings to its current influence across nearly every industry, and discuss how a liberal arts education uniquely prepares us to ask not just what AI can do, but what we should do with it.
From the environmental impact of large-scale computing to the creative and ethical questions facing artists and educators, Dietterich offers a nuanced, hopeful, and deeply human vision for how we can shape the future of intelligent machines.
This isn't just a conversation about technology; it is a reflection on curiosity, ethics, and what it means to stay human in an age of algorithms.
What We Cover in this Episode
- The origins of machine learning and how early innovators taught computers to "learn."
- The environmental and ethical implications of AI and how efficiency and innovation can coexist.
- Why AI's biggest challenge is not what it can do, but what humans choose to do with it.
- How a liberal arts foundation fosters critical thinking, ethics, and responsible innovation.
- The promise of "computational sustainability" and AI's role in addressing global challenges.
Listen Now
Carmen Twillie Ambar: I am Carmen Twillie Ambar, president of 91直播. Welcome to Running to the Noise, where I speak with all sorts of folks who are tackling our toughest problems and working to spark positive change around the world. Because here at 91直播, we don't shy away from the challenging situations that threaten to divide us.
We run toward them.
When Tom Dietterich earned his math degree from 91直播 in 1977, artificial intelligence systems couldn't learn or adapt. They simply followed rules. Over the next few decades, Tom helped change that. He became one of the pioneers of machine learning—the technology that allows AI to make sense of data, spot patterns, and make decisions.
Today, it powers everything—from Siri on your phone to the algorithms planning your commute, recommending your next podcast, or helping doctors develop targeted medical treatments. Tom's work has made him a sought-after authority. He advises the U.S. government on AI technologies and has earned some of the field's top honors.
His research has shown how data and algorithms can solve real-world problems: improving wildfire management, protecting endangered species, enhancing agricultural productivity, and advancing drug development. When machine learning technologies leapt from the lab into daily life, Tom's liberal arts education helped him step back and ask the bigger questions.
How will AI shape our institutions? How will it affect our understanding of ourselves? And what ethical responsibilities come with creating machines that learn and make decisions that increasingly impact our lives? In this episode, I speak with Tom, distinguished professor emeritus at Oregon State University, about the power and the peril of artificial intelligence—and why the real challenge of AI isn't what it can do, but what we choose to do with it.
I'm so excited to have Tom Dietterich here today because we rolled out a year of AI exploration at 91直播—a campus-wide initiative that includes speakers, workshops, and opportunities for people to think about the impact of AI in all the work they do. Obviously, we're thinking about environmental and privacy impacts, but also how this technology might reshape higher education—what things we can take from it that we feel really positive about and what things we should be concerned about.
I'm really excited to have this conversation with you because I think it fits within that AI theme, and I'm hoping you can help us think more clearly about this technology and what it means. Tom, I guess I wanted to start with whether you think it's right to describe you as one of the founders of AI or machine learning. How would you want people to think about your early contributions to this technology?
Thomas Dietterich: I'm not a founder. I'm probably one of the first graduate students who worked in artificial intelligence. Generally, we date the founding of AI to 1956—John McCarthy, Herbert Simon, and people like that. And I was only two years old at the time, so I was not at that first workshop at Dartmouth.
But starting in 1978—or fall '77, right after I graduated from 91直播—I began a PhD program at the University of Illinois and later moved on to Stanford to finish my PhD. At that point, the field of machine learning was first given its name and started to have regular meetings. There was a workshop in 1980—thirty people attended. This year, I think the big AI conferences are having 15,000 to 17,000 people attending. So it's been quite a ride.
Carmen: I'm wondering if you can help us understand what those early days were like. I think to myself—what were the courses? What were you trying to accomplish in those early days as you were thinking about machine learning and its possibilities? Take us back to those early moments.
Tom: Of course, whenever you're programming a computer to do something, you need to somehow specify what you want the computer to do and how to do it. Ideally, you want to provide a step-by-step recipe, and we write that in a computer programming language, of course.
But when people started to think about things like speech recognition, language translation, or maybe controlling a robot, we quickly realized that we had no idea how to write down the step-by-step process by which our eyes see that—oh, that's a dog running across the quad—or that we hear something and recognize, oh, that's a Bach prelude.
It happens in a part of our brains that we cannot introspect into. How are we supposed to write those programs? So one thought was maybe we could teach the computer—quote-unquote, I always hesitate to use these human-loaded words—by providing examples of what the input should be and what the output should be, and then seeing if we could write a program that could find patterns in the input that would tell it how to produce the output.
That was the idea of machine learning: trying to fit a function—or, mathematically, to find a mapping between inputs and outputs.
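Tom's definition of machine learning, fitting a mapping from example inputs to outputs, can be sketched in a few lines of Python. This is an editorial illustration with invented data points, not anything from the conversation:

```python
# Supervised learning in miniature: fit y = a*x + b to example (input, output)
# pairs by ordinary least squares, then use the learned mapping on a new input.

def fit_line(pairs):
    """Return slope a and intercept b minimizing squared error over (x, y) pairs."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# "Training data": outputs supplied by a teacher, as in supervised learning.
examples = [(1, 3.1), (2, 4.9), (3, 7.2), (4, 8.8)]
a, b = fit_line(examples)

def predict(x):
    return a * x + b  # the learned input -> output mapping

print(predict(5))
```

The same idea scales up: modern systems fit far richer function families to far more examples, but the shape of the problem, finding a mapping that explains the input-output pairs, is the one described above.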
Carmen: As I was preparing for this interview, I read that you mentioned when your team was teaching robots to move, you were literally falling and trying again—that was how you thought about the work, trying to teach the computer to "learn how to learn." Maybe that's not quite the right phrase?
Tom: No, it's just that for some tasks—like language translation—we can have a sentence in Spanish and the corresponding sentence in English because a human translator has created it. We sometimes call that supervised learning—it's giving the right answer.
But it's much more difficult in many tasks, and robotic motion is one of them, although I personally haven't worked on that so much. I've worked on the underlying mathematics and algorithms. Another example is in biological conservation, where you're considering what steps we could take to prevent an invasive species from taking over.
There's no human expert who knows the right answer, so you can't just tell the system, "When you have this situation, here's what you should do." Instead, what we tend to do is build a simulation of some kind, and then have the computer test out different strategies in simulation, figure out which ones work best, and then learn from those. That's known as reinforcement learning—or, technically, bandit feedback, which comes from the idea of a slot machine being a "one-armed bandit." You can only learn what payout you get from a machine by pulling the lever.
Carmen: So by pulling the lever multiple times, you get different answers, but ultimately you arrive at the right answers through continual experimentation—is that the concept?
Tom: Right, yes. Of course, with robots, it's extremely difficult in the physical world to have them learn by trial and error, so we usually do that first in simulation. One of the big advances over the last 30 years is that we have much better physical simulators that can simulate the mechanics and friction and everything that the robot would experience. The computer can do millions or billions of simulated experiments and come up with a strategy for walking, let's say, that's pretty good—and then you test that out in the real world.
Several years ago, Sony came out with this robot called the Aibo—it's the little puppy. Colleagues of ours at the University of New South Wales and Carnegie Mellon wore out the hip joints of their robots testing different ways of walking across carpeted floors and had to give them hip replacements as a result.
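The "one-armed bandit" feedback Tom describes above can be sketched as an epsilon-greedy strategy in Python: pull levers, record payouts, and mostly favor whichever machine has paid best so far. The payout probabilities here are invented for illustration:

```python
import random

random.seed(0)

# Three slot machines with hidden payout probabilities (invented for illustration).
true_payout = [0.2, 0.5, 0.8]

counts = [0, 0, 0]        # how many times each arm was pulled
totals = [0.0, 0.0, 0.0]  # total observed reward per arm

def choose(epsilon=0.1):
    """Epsilon-greedy: usually pull the best-looking arm, sometimes explore."""
    for i in range(3):
        if counts[i] == 0:
            return i  # try every arm at least once
    if random.random() < epsilon:
        return random.randrange(3)  # explore
    return max(range(3), key=lambda i: totals[i] / counts[i])  # exploit

for _ in range(5000):
    arm = choose()
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    totals[arm] += reward

best = max(range(3), key=lambda i: totals[i] / counts[i])
print("estimated best arm:", best, "pull counts:", counts)
```

The learner never sees the hidden probabilities; it only sees the payout of the lever it pulled, which is exactly the bandit-feedback setting, and over many pulls its estimates converge on the best arm.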
Carmen: So I was wondering if you could talk a little bit about one of the concerns people raise—I'm sure you've heard many—around environmental issues, ethical issues, all the things that people bring up in relation to AI. Maybe you could take just one of those, like the environmental impact, and give us your perspective. Should we be concerned? What should we be concerned about, and how should we analyze our progress toward addressing those concerns?
Tom: Let's first talk about the environmental impact. I think that's actually the most tractable of the questions. Over the last few years, there's been a huge investment of capital in trying to scale up large language model systems.
Back in 2017, this approach for building large language models—known as the transformer model—was invented at Google. The initial results were intriguing, so people thought, "Maybe we should just train them on a lot more data," and that led to GPT-3, then GPT-4, GPT-5, and all their siblings and competitors.
But very little attention was paid to making this efficient. Everyone was just curious what kind of behavior we could get out of these systems—how they would act. I would characterize this as the brute-force period. People threw huge amounts of computing power and electricity at the problem, along with vast quantities of water to cool the data centers. It was very inefficient.
Our colleagues and competitors in China, not having access to such high-end machines, were forced to innovate—and they quickly achieved methods that were twenty times more efficient. In the meantime, NVIDIA, which makes the majority of the chips used to run these systems, has been increasing the efficiency of its chips at a very rapid pace.
So there's a huge motivation for the companies spending billions of dollars running these systems to make them more efficient, cheaper, and less energy-intensive—because it's costing them real money. And it's not clear they're making that money back yet, since the technology is still so immature and not being used as widely as they'd like.
So I think the problem of environmental impact will, to some extent, solve itself—because the incentives will be there.
Carmen: The incentives are very powerful here, right?
Tom: Exactly. We have a lot of expertise in computer science, electrical engineering, and power systems that we can bring to bear. We know how to make these things run faster. The question is, what happens if demand increases exponentially?
We could end up with something like adding lanes to a freeway—where more people get on the freeway and traffic gets worse. We'll probably see some of that, particularly if the technology turns out to be as useful as people claim it will be.
But I think there's a deeper question: is there a better way of building AI systems that doesn't rely on massive amounts of training data and immense models?
I feel like the machine learning field is in a moment of crisis—in the sense that Thomas Kuhn describes in The Structure of Scientific Revolutions. A scientific field proceeds through routine experimentation until things just don't seem to be working.
As a machine learning researcher, I could never have imagined we'd have this much capital investment. We've scaled up these systems massively, yet we're finding they still don't do what we want them to do.
The most famous issue, of course, is hallucination—systems making things up. The original term came from language translation, where a translation would mention something that didn't appear in the source text—it was just something very likely to have been there. In computer vision, you might ask a computer to describe a picture of a bathroom showing a sink and towel rack, and it might say, "There's also a mirror there." It's not in the image, but probably in the room.
That's where the term hallucination came from—it's imagining things that aren't there. But more broadly, we've seen over 150 legal cases where documents submitted by lawyers cited non-existent law cases. There are also made-up academic citations in scientific papers.
Another problem is that these systems don't seem able to learn abstractions that generalize. What they do is much closer to memorizing billions of facts and interpolating between them, without going far beyond the data.
A good example: you can train a computer on multi-digit multiplication problems—two digits, three digits, up to nine digits—but no matter how many examples you give, the models can't handle problems that are much larger. If you train them on eight-digit multiplication, they can't do sixteen or thirty-two digits.
They're not learning the general rules. What's funny is you can ask them what the rules are, and they'll tell you—but they can't actually use that knowledge. You can also ask them the rules of chess—but they can't play chess.
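The contrast Tom draws is striking because the general rule the models fail to internalize is short enough to write down directly. Here, as an editorial illustration, is grade-school long multiplication in Python, which works at any number of digits:

```python
# Grade-school long multiplication over digit lists: the "general rule" that
# works identically for 2-digit, 9-digit, or 32-digit numbers.

def long_multiply(a: str, b: str) -> str:
    """Multiply two non-negative integers given as decimal strings."""
    result = [0] * (len(a) + len(b))
    # Accumulate digit-by-digit partial products at the right positions.
    for i, da in enumerate(reversed(a)):
        for j, db in enumerate(reversed(b)):
            result[i + j] += int(da) * int(db)
    # Propagate carries from least to most significant position.
    for k in range(len(result) - 1):
        result[k + 1] += result[k] // 10
        result[k] %= 10
    digits = "".join(map(str, reversed(result))).lstrip("0")
    return digits or "0"

print(long_multiply("12345678", "87654321"))  # works just as well at 16 or 32 digits
```

A dozen lines capture the procedure exactly; the point of the example in the conversation is that models trained on millions of worked instances still fail to extract and apply this rule beyond the lengths they saw.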
That's evidence we may need a fundamentally different way of building AI systems than what we have now.
Carmen: I think that's so fascinating because one of the things that's been happening on our campus—there's been a lot of concern about AI technology replacing people's jobs, eliminating the work that creatives do. On the faculty side, people are certainly wondering whether students will lose critical thinking skills and cultural competency skills because of their use of AI.
So I'm interested in your perspective on this notion of AI replacing human thinking. You've talked about some of its limitations and the things it really can't do—and maybe the model will be built in a way that changes that—but I'm wondering whether you have that same level of concern about how AI might eliminate industries and the kind of human involvement in the work that we do.
Tom: Well, absolutely, I share those concerns. Where to begin—writing, thinking, and knowing how to construct a logically sound argument are absolutely essential skills, and something we have to teach in our universities all the time. We do that mostly by teaching writing, and like every academic, I find it's only when I sit down and try to write out my arguments clearly that I start to see all the holes in them.
I think it's absolutely essential that we do that. Now, the question is—particularly in the earlier courses where we assign writing assignments that are fairly routine—some of the large language model systems have read all of these previous essays on the same topics and can generate a new one that's just another version of those.
I think we have to ask whether those exercises were actually helping students learn to think, even in the old days. Students are certainly now going to be able to outsource that thinking to these models.
Carmen: So are you asking us to rethink how we approach the academic experience and what we're trying to teach students in light of this new technology? Are you suggesting that we still want to teach the same things—like critical thinking, understanding how an argument is structured, what it means to have evidence, whether that's evidence in a philosophical argument, a scientific argument, or a mathematical one?
Tom: A weakness of these large language models is that they don't understand what an argument is or what evidence means. So we still want to teach that. But the question is how. If it turns out that students can produce a sort of simulation of having thought through a problem and we can't tell the difference, that means we weren't doing it right to begin with, in some sense.
Carmen: And you're saying there's probably a higher-order level of teaching that we need to do so we can help students get to that next level of inquiry and understanding.
Tom: I mean, of course, as teachers, we set tasks that we hope have strong pedagogical value for our students. Maybe we need to do more in-class writing. Maybe we need to do more human-to-human paired writing. This is where our creativity as teachers and as students needs to be brought to bear. But I totally grant—it's a challenge, and a lot of our old pedagogy, or at least our techniques, will have to be replaced. And that replacement is likely to be more expensive.
Carmen: One of the reasons I wanted to do this year of exploration of AI at 91直播 is that we can't ignore this technology. It's here. We'd be doing our students a disservice not to help them understand it and think about it. And if we need to shift how we do our work in order to better prepare our students, this year of exploration is about figuring out how to do that—because turning a blind eye to AI isn't going to help us solve the challenges.
One of the things I really appreciate about what you've said about your own experience here at 91直播 is how your liberal arts background—one that few computer scientists have—has informed the way you've thought about this technology. It's impressed upon you the importance of thinking about its ethical impacts in ways that, I assume, might not have been as clear if you hadn't had the kind of background you had at 91直播.
Tom: The most important class I took at 91直播 was on political philosophy with Harlan Wilson. It was an extremely popular class, especially for more scientifically oriented students. In his class, he said, "Karl Marx claimed to be a scientist—read Marx and critique his thinking. And also go read Thomas Kuhn's The Structure of Scientific Revolutions for a glimpse of what was then contemporary philosophy of science."
It was really that introduction to the philosophy of science that led me, in graduate school, to think about machine learning. The goal of machine learning is, in some sense, to automate scientific inquiry. The central question of the philosophy of science—or one of them—is: how do we acquire new knowledge and learn new things about the world in a systematic and rational way?
How do we go from our common-sense understanding of the world to quantum mechanics through a series of rational steps? I thought that was a fascinating question worthy of study, and that really motivated me to do more reading in the philosophy of science after I left.
But the other thing, of course, is that one of the things you're taught at 91直播 is—it's complicated. Basically, everything's complicated. You take classes in history, economics, sociology—and it's just drummed into you that if anybody proposes a simple answer, it's got to be wrong.
Carmen: One of the ways I describe the experience here is that students learn how to analyze complexity.
Tom: Well, or at least you know it's there. You may run away from it if you can, but engineers often emphasize seeking out the part of the problem that can be solved by engineering methods—by applying physics and chemistry and so on—and focusing on that, while trying to ignore the larger sociotechnical concerns.
That was the world I lived in. I would say for the first ten years of my career, I was having a good time solving little mathematical puzzles in my field, doing normal science. But I was a little disappointed that they weren't leading to any real-world impact. Around 1996, when I was first promoted to full professor, I thought, you know, it's time to branch out. So I started collaborating with ecologists here at Oregon State University.
Carmen: Was that when you were modeling migrations?
Tom: I didn't get to bird migration for another five or six years. I was first working on global vegetation models, because the global climate models at that time only modeled the atmosphere—and the Earth was just considered a flat surface. The question was, how about if we bring all of plant life into these models—would that help?
I worked with collaborators here who were doing that, and then I moved on to some other things. Eventually, I got connected with the Cornell Lab of Ornithology. They have a big citizen science project called eBird—one of the oldest—where bird watchers fill out checklists every time they go birdwatching. The question was: could we actually do any science with this so-called citizen science data?
One of the things we attempted was to predict bird migration and forecast when it's going to happen—because then we might be able to make some interventions, particularly turning off lights in skyscrapers or other sources of artificial light, because that really confuses the birds.
Carmen: Could you talk a little bit about those parts of your research? I know you've done some work on fires and the liability standards associated with wildfires. Maybe you could help our audience understand how AI plays a role in that type of research. Some of what concerns people is that they can only see the things that make them fearful about what this technology may be. They're not as well versed as you are in some of the ways this technology might help us do right by the planet, do right by the world. So maybe you could help our audience understand some of your research in that context.
Tom: It is important to realize that there are many, many branches of artificial intelligence. It's a banner that covers a lot, which is quite confusing. Large language models are really just a small part, although they're sucking all the oxygen out of the room right now. But an area I worked in with wildfire and also with invasive species is really the problem of management. In wildfires, invasive species, and really in agriculture generally, you can think of it as trying to manage an ecosystem—make sure that it's functioning well, that species aren't going extinct, and so on.
We end up with a problem very similar to the robot problem we were talking about before. We don't know the right way to manage these systems. We usually build a simulation first. In the case of wildfire, we were able to repurpose and combine several big simulators that already existed—particularly something called FARSITE from the U.S. Forest Service in Missoula, Montana—and then simulate different management strategies.
Now, in that case we were interested in the legal question of what kind of liability rules should apply to wildfire. Right now, the usual standard is: let's say we have two different landowners who own adjacent property, and a fire starts on, say, Alice's land and burns into Bob's land. Does Alice have any obligation to pay for the damage that Bob experienced, or not? Currently the answer is no. Wildfire is treated as an act of God; everybody just has to deal with their own property.
But there was a proposal to have a liability rule in which Alice would have to reimburse Bob for his losses. There was also the thought that if Alice had taken steps to try to reduce fire risk on her property—say, reduced the load of accumulated fuels in our forests here in the West—then maybe she should not be liable, so we could have some incentive for her to behave differently.
So we compared various rules. What we found was that the liability rule that said Alice would be responsible for Bob was a disaster. Where does the AI come in? In simulation. We simulate what would happen with various wildfires and simulate what Alice and Bob would do—what would be their response? Each of them has some objectives they're trying to achieve. In this case, we were looking purely at economics.
What we found is that fire is so rare that the optimal behavior is to do nothing, because chances are a fire won't start on your property. And if you're hurt by one, someone else will pay for it, so you have no incentive at all. This was really applied economics, in some sense. We found that a rule under which you would have no liability if you had brought your property up to a minimum standard was better, and it altered behavior—although we still found that your risk was much less predictable under the liability scheme than under the scheme where you just have to take care of your own property no matter what.
Which makes sense: if you have to take care of your own property, then you just have to live with the risks and you'll do things. If there's a chance that someone else will pay for it, then you could have a much bigger or much lower risk, and the variability goes way up. In some sense, it's a less desirable state of affairs. But in economics you talk about the social optimum—imagining one decision maker could control the entire economy. What would be the outcome? We found that we could approach that with this liability standard.
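The "fires are so rare that doing nothing is optimal" finding can be illustrated with a toy Monte Carlo in Python. Every number here is invented for illustration and this is not the actual study's model:

```python
import random

random.seed(1)

# A toy Monte Carlo of the incentive problem described above; all parameters
# are invented and bear no relation to the real research.
FIRE_PROB = 0.02    # chance per year that a fire starts on your parcel
DAMAGE = 100.0      # loss if an untreated parcel burns
TREAT_COST = 5.0    # annual cost of fuel-reduction work
YEARS = 100_000

def average_annual_loss(treat: bool) -> float:
    """Simulate many years and average one owner's cost under one strategy."""
    total = TREAT_COST * YEARS if treat else 0.0
    for _ in range(YEARS):
        if random.random() < FIRE_PROB:
            total += DAMAGE * (0.5 if treat else 1.0)  # treatment halves damage
    return total / YEARS

untreated = average_annual_loss(False)
treated = average_annual_loss(True)
# Because fires are rare, the expected saving (about 1.0/year here) is smaller
# than the treatment cost, so "do nothing" wins on pure economics.
print(round(untreated, 2), round(treated, 2))
```

With these made-up numbers the untreated strategy averages roughly 2 per year against roughly 6 for treating, which is the shape of the incentive problem the simulations in the study were built to probe at far greater fidelity.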
Carmen: Interesting. I do think there is this sense of AI as a kind of amorphous, probably evil thing that has no benefit to society. You seem to be saying that large-scale simulations can help us think about policies that solve complicated problems in ways we might not be able to without AI as a framework for this work. I guess one other question: at 91直播, probably close to one third of our students—not quite, but getting close—are in the arts. We have 500 students in the Conservatory, and probably 300 students in what I would describe as the practicing arts—studio art, painting, theater, name the practicing art. There's a lot of concern in the creative space, particularly about the impact of AI on artists and their ability to have their work used and not appropriated in the wrong way, or the ability of AI to create whole movies and storylines. What would you say about that space in AI?
Tom: It's a fascinating set of questions. It looks like existing copyright law does not really protect artists in the way we might want them to be protected. Of course, there's a long tradition in art of making copies of the work of other artists as a form of practice and learning. But if you try to sell those, that's generally considered forgery.
I think we may need new law that says not only is the exact previous expression—the exact painting, the exact movie, whatever—protected, but that in some sense the style deserves protection too. That's going to be very hard to define, but the style of the artist should have some protection. If I can walk into your studio, take pictures of everything, and then produce 10,000 works in the same style, at a minimum I'm going to reduce the price anyone will pay. That just seems like a clear case of theft.
Right now, the law says if I'm trying to pass them off as your work, that's illegal. But if I just say, "I like the style, so I made a bunch under my name," that's considered okay. I don't think that's okay. I'm less worried about AI systems, in a very simple push-button way, generating whole movies or news stories—at least with the current AI technology. It's very statistical and exhibits this regression-to-the-mean phenomenon. It generally produces things that are pretty boring and vanilla—no offense to lovers of vanilla.
In fact, there's going to be increasing economic or artistic value placed on original expression, because the AI stuff is not going to be good enough. Now, AI in the hands of a good artist could become even more interesting. I've already seen interesting pieces. I follow a couple of artists on social media who use AI tools to design ceramics, which they then build and manufacture, and they're really fascinating. There's space here, and it will be very interesting to see how that unfolds.
Carmen: I think I heard someone say AI's not going to replace the human endeavor, but humans who don't understand how to use AI and its impact will have challenges. What do you think?
Tom: I feel like that's too much of an AI-booster statement. People will be able to make fantastic art without knowing anything about AI. But some people will choose to use it—just as digital artists already use computer-based tools. What will be interesting is what they do with those tools and how they bring their humanity to it. I'm excited to see where that takes us.
Carmen: I certainly know we're going to have to figure out how to think about AI on college campuses in all sorts of ways, and its impact can be wide-reaching. I'm excited about this year of exploration because I think it will help us come to better conclusions about our own thinking, and also about the impact on the next iteration of teaching, learning, and careers. That's one of the things I'm hoping to accomplish this year. So, could you talk a little about something your colleagues call "computing for a better world and a sustainable future"?
Tom: I co-led two very big grants on what we call computational sustainability, and I've been involved in some other AI-for-social-good efforts.
Carmen: We haven't heard people talk much about computational sustainability—AI for a better future. It seems a little lost in these conversations about AI. Could you help our audience understand what those efforts have been about and their purpose?
Tom: This area of computational sustainability was developed by Carla Gomes and myself—she led the project. She's a faculty member at Cornell. Our vision is: how can we develop methods in computer science to promote sustainability in the natural environment and in our economic and social systems, within the United Nations' Sustainable Development Goals framework?
We've found a wide variety of projects under this umbrella. Some have been like the bird migration or invasive species management problems. There's been work on law enforcement for anti-poaching efforts—how do you model where poachers are going and predict where they'll be using machine learning? How do you design the routes forest wardens should take so they're very unpredictable and have a higher probability of catching the bad guys?
We've also looked at materials science. Some of the biggest potential benefits of AI are going to come from applications in new drugs and new materials—for example, more efficient batteries, better energy transmission, all the things we need to decarbonize the economy. Those are areas where we've already seen AI techniques provide powerful tools for predicting the structure of proteins from their sequences—AlphaFold from Google DeepMind and Rosetta from the University of Washington.
Back in 1991, I was involved in a startup company trying to use machine learning for drug design. We were about 30 years too soon. Now there are many companies in this space, and they're starting to see real success.
One project I'm currently involved with is in weather networks. If you look at a map of the weather stations around the world—ground stations that record temperature, rainfall, wind speeds—you see huge empty spaces in South America and Africa. At the same time, through the wonders of miniaturization and electronics, we can now build weather stations that are maybe twice the size of a Coke can but 10 to maybe 100 times cheaper than traditional stations, and we have cellular data networks.
I'm involved in a project called TAHMO—the Trans-African Hydro-Meteorological Observatory—which hopes to operate a network of, our dream is, 20,000 ground weather stations across all of Sub-Saharan Africa. We have 750 stations right now across 22 countries. It's a nonprofit headquartered in Nairobi, with field technicians throughout those countries. My role, which is relatively minor, is to try to detect when a weather station needs a technician visit to clean it and replace broken components. It's a statistical maintenance problem—but surprisingly difficult.
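As one illustration of what such statistical maintenance detection might look like (a hypothetical sketch, not TAHMO's actual method), a station whose readings drift away from its neighbors can be flagged in a few lines of Python:

```python
# Hypothetical sketch of statistical maintenance detection: flag a station whose
# daily readings drift systematically away from the median of nearby stations.
from statistics import median

def flag_drift(station, neighbors, threshold=3.0):
    """Return True if the station's residuals against the neighborhood median
    show a persistent offset much larger than their day-to-day scatter."""
    daily_medians = [median(day) for day in zip(*neighbors)]
    residuals = [s - m for s, m in zip(station, daily_medians)]
    mean_r = sum(residuals) / len(residuals)
    spread = (sum((r - mean_r) ** 2 for r in residuals) / len(residuals)) ** 0.5
    return abs(mean_r) > threshold * max(spread, 1e-9)

# Invented example readings (degrees C) over five days.
healthy = [20.1, 21.0, 19.8, 20.5, 20.9]
drifting = [25.1, 26.0, 24.8, 25.5, 25.9]  # e.g. a fouled sensor reading high
nearby = [[20.0, 21.1, 19.9, 20.4, 21.0],
          [20.2, 20.8, 19.7, 20.6, 20.8]]
print(flag_drift(healthy, nearby), flag_drift(drifting, nearby))
```

The real problem is much harder, as Tom says: sensors fail in many ways, weather itself is variable, and neighboring stations can be far apart, but the basic move of comparing a station against a statistical expectation is the same.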
Carmen: To solve, right? No, that is fantastic. For regular listeners of this podcast, you know it was named after the way Michelle Obama described Obies—91直播 graduates and students—as people who "run to the noise."
Tom, I guess I'm wondering what you would say to that statement. How do you, in your work or personal life—however you want to answer the question—run to the noise?
Tom: I'm always looking for ways that I can make a difference in the world that go beyond just publishing papers and solving lovely technical problems that are fun. One of the ways I've done that is by building a network—a sort of professional social network on my own campus and across the country—of people who know people, who can introduce me to the right folks when I have an idea or see an opportunity.
So, when I saw an opportunity to think about wildfire, for example, I contacted someone on my campus, and she introduced me to two professors in the forestry school who were experts in that area. With the Africa project, it was more a sequence of coincidences. A colleague of mine, John Selker, a hydrologist by training, is one of the masterminds behind this Africa project. I said, "Oh, that looks cool—I bet I could help." So I volunteered my time and saw what we could do. Actually, I've had three PhD students finish, all working on these weather network problems.
Carmen: We thank you so much for your time and for being one of those Obies our students can look to, to know that what they want to achieve is possible. We'll be following you and your work—and I'm sure calling on you to help us think about the Year of AI Exploration at 91直播. We'd love for you to come and speak and help us understand your work, so be on the lookout for another phone call from us.
Thanks for listening to Running to the Noise, a podcast produced by 91直播. Our music is composed by Professor of Jazz Guitar Bobby Ferrazza and performed by the 91直播 Sonny Rollins Jazz Ensemble, a student group created through the support of the legendary jazz musician.
If you enjoyed the show, be sure to hit that subscribe button, leave us a review, and share this episode online so Obies and others can find it too. I'm Carmen Twillie Ambar, and I'll be back soon with more great conversations from thought leaders on and off our campus.
Running to the Noise is a production of 91直播 College.