In a server room far from any telescope or observatory, a neural network known as ExoMiner has been performing a task that once required years of meticulous human labor: studying starlight. Each of millions of data points represents a fractional dimming in the brightness of a far-off star; interpreted correctly, this nearly undetectable signal can indicate that a planet has just passed in front of the star. ExoMiner knows how to interpret it, and it outperforms the human experts who trained it in several quantifiable ways.
The result: more than 300 previously unidentified exoplanets discovered in data gathered by NASA’s Kepler Space Telescope. The mission officially ended in November 2018, when the spacecraft ran out of fuel and was retired; Kepler is no longer making observations. Yet the information it collected over nearly a decade of operation, covering light curves from more than 150,000 stars, continues to yield new findings. The telescope is gone. The archive endures.
AI & Exoplanet Discovery: Key Information

| Item | Details |
| --- | --- |
| Topic | AI-driven exoplanet discovery using machine learning on telescope data |
| Key AI System (1) | ExoMiner, a deep neural network developed by NASA; confirmed 300+ exoplanets from Kepler data |
| Key AI System (2) | RAVEN, an AI pipeline developed at the University of Warwick; applied to NASA TESS data |
| RAVEN Results (2026) | 118 planets confirmed (31 newly identified); 2,000+ high-quality planet candidates |
| Data Sources | NASA Kepler Space Telescope (2009–2018); NASA TESS (launched 2018, ongoing) |
| Stars Analyzed by RAVEN | Over 2.2 million stars from TESS’s first four years of data |
| Lead Researcher (RAVEN) | Dr. Marina Lafarga Magro, Postdoctoral Researcher, University of Warwick |
| Detection Method | Transit photometry: detecting dips in starlight caused by orbiting planets |
| Notable Finds | Ultra-short-period planets (orbit under 24 hrs); “Neptunian desert” planets; multi-planet systems |
| Total Confirmed Exoplanets (to date) | Over 5,000 (as of early 2026) |
| Publication | Monthly Notices of the Royal Astronomical Society (MNRAS), March 2026 |
| Official Reference | science.nasa.gov/mission/tess |
It’s worth stopping to consider this. The notion that a defunct spacecraft is still adding to our catalog of known worlds, through data reanalyzed by a machine learning system, says something subtly amazing about the state of astronomy. For decades, telescope time and data collection were not the bottleneck in planet hunting; the bottleneck was the capacity of humans to sift through everything that came back. The Kepler archive, large and largely unread, contained signals that were too weak, too ambiguous, or simply too numerous for research teams to analyze fully. ExoMiner changed that math.
Fundamentally, ExoMiner recognizes patterns at a scale and consistency no human analyst can match. Kepler found potential planets using what astronomers call the transit method: observing minute variations in a star’s brightness. The problem is that not every dip represents a planet. Eclipsing binary stars produce similar signals, and so does instrumental noise.
So does stellar variability: spots, flares, and the ordinary turbulence of a living star can cast shadows in the data that look, on the surface, like something crossing the star’s face. Human experts developed complex vetting procedures to weed out these false positives. ExoMiner learned and internalized those procedures, then processed the Kepler catalog faster and more accurately than any research team could manage by hand.
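To make the transit method concrete, here is a toy sketch, not ExoMiner’s actual pipeline and with all numbers invented: simulate a light curve containing a periodic box-shaped dip, then recover the orbital period with a brute-force phase-fold search.

```python
# Toy transit detection (illustrative only, not the ExoMiner pipeline):
# simulate a noisy light curve with a periodic box-shaped dip and
# recover the period by phase-folding. All numbers are invented.
import numpy as np

rng = np.random.default_rng(42)

t = np.arange(0.0, 90.0, 0.02)                 # 90 days, ~30-minute cadence
true_period, depth, duration = 3.7, 0.01, 0.1  # days, fractional dip, days

flux = 1.0 + rng.normal(0.0, 0.002, t.size)    # noisy stellar baseline
flux[(t % true_period) < duration] -= depth    # planet blocks ~1% of the light

def dip_strength(period, n_bins=50):
    """Fold the light curve at `period` and return the deepest bin's mean flux."""
    phase = (t % period) / period
    idx = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    sums = np.bincount(idx, weights=flux, minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    means = sums[counts > 0] / counts[counts > 0]
    return means.min()

# Brute-force search: only the true period folds every transit into the
# same phase bin, producing the deepest dip.
periods = np.arange(2.0, 10.0, 0.005)
best = periods[np.argmin([dip_strength(p) for p in periods])]
print(f"recovered period: {best:.2f} days (true: {true_period})")
```

Real pipelines do far more than this: the hard part is not finding the dip but rejecting the eclipsing binaries, instrument noise, and stellar variability described above, which is exactly where the learned vetting comes in.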
More recent work at the University of Warwick gives the same fundamental narrative a new angle. There, a team developed an AI pipeline called RAVEN and applied it to data from NASA’s Transiting Exoplanet Survey Satellite, or TESS, which launched in 2018 and has been scanning the sky ever since, concentrating on bright nearby stars.
RAVEN examined observations of more than 2.2 million stars from TESS’s first four years. The results were impressive: over 2,000 high-quality planet candidates, nearly 1,000 of which appeared in no previous catalog, and 118 confirmed planets, including 31 entirely new ones. The study’s lead author, Dr. Marina Lafarga Magro, called it “one of the best-characterized samples of close-in planets assembled to date.” That is meticulous scientific phrasing for what is, in reality, a huge haul.
Some of what RAVEN discovered is truly unusual. Among the confirmed worlds are planets that complete an orbit around their star in less than a day, racing around at distances so close that surface conditions would be, to put it mildly, uninhabitable.
Others belong to a region known as the Neptunian desert, where planets of a particular size are statistically rare because stellar radiation at that distance gradually strips away their atmospheres. Finding any at all is a small puzzle. Finding several in a single pass raises questions that remain open. The science is still being worked out, but they may be survivors of an evolutionary process that most similar planets don’t make it through.
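To get a feel for what an ultra-short-period orbit implies, Kepler’s third law gives the orbital distance directly from the period. The 18-hour period and Sun-like star below are illustrative assumptions, not values from the RAVEN sample:

```python
# Kepler's third law: a^3 = G * M_star * P^2 / (4 * pi^2).
# The 18-hour period and Sun-like stellar mass are illustrative assumptions.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_star = 1.989e30    # one solar mass, kg
AU = 1.496e11        # astronomical unit, m

P = 18 * 3600        # an 18-hour orbital period, in seconds
a = (G * M_star * P**2 / (4 * math.pi**2)) ** (1 / 3)

print(f"semi-major axis: {a / AU:.3f} AU")  # ~0.016 AU for these assumptions
```

At roughly 0.016 AU, under 2% of the Earth–Sun distance, such a planet sits only a few stellar radii from its star, which is why intense irradiation and atmospheric stripping dominate the physics of these worlds.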
It’s difficult to ignore how closely this mirrors what happened in other domains once machine learning met sufficient training data. Medical imaging offers an analog: algorithms began identifying early-stage tumors in scans that radiologists had cleared, not because of carelessness but because of the limits of human attention across thousands of images. The machines were not more intelligent in any general sense.
They were tireless, quicker, and more consistent. Something similar is now happening in astronomy, and the comparison is instructive. When Google and NASA first applied deep learning to Kepler data back in 2017, the algorithm found two planets, Kepler-90i and Kepler-80g, that had been sitting undiscovered in a well-studied dataset. That was the proof of concept. ExoMiner and RAVEN are what follow it.
The catalog now contains over 5,000 confirmed exoplanets. Ten years ago, that figure was under 2,000. The acceleration has been fueled not just by larger telescopes or more ambitious missions, but by better extraction of meaning from data already gathered.
Thanks to the James Webb Space Telescope’s detailed atmospheric spectra of some of these worlds, researchers can now identify molecules like carbon dioxide and water vapor in atmospheres light-years away. AI is being applied to those spectra as well, speeding up the characterization of previously discovered planets and helping prioritize candidates for Webb’s follow-up.
Observing this over the past few years has given me the impression that the process is compounding in a way that wasn’t entirely foreseeable even ten years ago. New targets, data, and training sets for the subsequent generation of algorithms are produced by each discovery. It’s still unclear if any of the 300 exoplanets that ExoMiner found from Kepler’s archive are actually Earth-like, but the fact that they were there, waiting in the data of a closed telescope, and were eventually discovered by a machine that learned to read starlight is in and of itself a sort of answer.
