Expert Talk

SDA TAP Lab, 2025 Feb 26

Introduction

Aloha, I’m T.S. Kelso and I am the creator, chief scientist, and CTO of CelesTrak. Today I’d like to talk story with you about what CelesTrak is, what it does, and how that can help with many of the things you are all doing to provide space domain awareness in the near-Earth space environment.

So, let’s get right into it. I would like to start by talking about what CelesTrak is, first covering its historical background—the period from 1979 to 2004—and then diving into the major developments over the past 20 years that you might be most interested in. As we look at those developments, you should begin to see more clearly how CelesTrak helps improve SSA, particularly through the development and application of something we call Supplemental GP, or SupGP, data. And we’ll go through some recurring problems that we already provide solutions for.

As we go through all of this material, I’d like you to consider three primary Focus Concepts:

The first is that SSA is the foundation of everything we do in space. Here we define SSA as the knowledge and practice of tracking objects in space and the space environment itself. This concept really shouldn’t be anything new, since it is impossible to apply military force against a target—whether it is to attack that target, surveil it, or even to protect it—if we don’t know where that target is located. As a foundational concept, it is vital that we ensure its integrity and not just assume it exists.

The second concept is that “the best way to predict the future is to create it.” This quote is often ascribed to Abraham Lincoln, and while it is consistent with many things he said, it can’t be tied directly to anything he ever wrote or spoke about. Others credit this quote to Nobel Prize winner Dennis Gabor, the inventor of holography, in his 1963 book, “Inventing the Future.” But he took a more negative perspective there, saying, “The future cannot be predicted, but futures can be invented.” The best-documented source for that quote is Alan Kay—a pioneer of personal computing. It was Kay’s response when pressed by Xerox executives about how to predict the future of computer technology back in 1971. You’re going to see that a lot of what CelesTrak has done—and continues to do—is inventing the future for SSA.

The final concept is the one by Dusty Miller that is inscribed on the Eagle and Fledglings statue at the Air Force Academy: “Man’s flight through life is sustained by the power of his knowledge.” To invent the future requires each of us to commit to life-long learning to seek out not only the problems that bedevil us, but simple and effective solutions to address them. This is a never-ending quest in which we can never become complacent.

Historical Background

So, what is CelesTrak? Many people think of it as the place to go to get ‘TLE’ data for tracking satellites. That’s certainly how it got started, but it has evolved in many ways that you may not be aware of. So, let’s start at the beginning.

Many of you know I am an Air Force Academy grad from the Class of ’76. To me, that often seems like “a long time ago in a galaxy far, far away” and it’s probably before most of you on this call were born. I double-majored in Physics and Math and really wanted to learn astrodynamics, but at first the Air Force had other ideas.

After graduation, I went off to Whiteman AFB to be an ICBM combat crew member. One of the things we were expected to take advantage of was the University of Missouri MBA program available under the Minuteman Education Program. I wasn’t too excited about that, until I discovered it would give me access to their mainframe computer—via punch cards and paper printouts. And it was there that I got exposed to personal computers in 1977. When I got my first TRS-80 computer the next year, I started by porting the code I’d written for the mainframe to predict the positions of the planets.

Then, one of those events happened that literally changed my life. Skylab, which was supposed to be rescued by the still-delayed US space shuttle, was decaying from orbit in 1979. One day, I came home from work and happened to be watching the ‘local’ weather—from Kansas City about 70 miles away—when the weatherman described where to look to see Skylab pass over the next night. I called a friend and we made plans to look at the appointed time.

Sure enough, right as the weatherman described, there was a point of light moving across the sky. Watching that, my immediate reaction was, “I want to know how to do that!” When you think about it, the chance that everything would line up to produce that result was pretty slim. TV reception was often bad—we had no cable—and you had to watch live—there were no VCRs. And the weather at the time of the event had to cooperate. If all those things hadn’t aligned, none of us would be here right now.

Of course, ‘doing that’ meant figuring out not only where to get the software—in a time before the Internet—but also the data. Even finding out where any of that might be available would turn out to be a real challenge.

Fortunately, I had ended up being at the right place at the right time to get hand-selected by AFSPC/CC, Gen Hartinger, to be in the first Graduate Space Operations class of 18 at AFIT in 1981. The focus was primarily on operations research, but some of us managed to create a minor in astrodynamics and that was an amazing learning experience. And AFIT gave us access to DTIC—the Defense Technical Information Center—where we could access all sorts of technical documents—on paper, of course.

The primary way to learn anything current at the time was through computer and astronomy magazines. At one point, in the February 1980 issue of BYTE magazine, I saw an ad for a software program called SAT TRAK being sold out of a computer store in Colorado Springs. When I finally got a copy, I set about trying to figure out how it worked. It was written in BASIC and wasn’t particularly well documented, so I asked my class leader, who used to work in Cheyenne Mountain, what he thought. He looked at it and quickly replied, “Oh, that’s SGP4.” I remember thinking, “what the heck is that,” but DTIC was about to help answer that question.

The one really important part of that software package, though, was it explained how you could write a letter to NASA and request NASA Prediction Bulletins for satellites you wanted to track. And Part I of those bulletins was the two-line formatted GP data we are all familiar with.

The process took weeks—writing a letter, then waiting for a response—before NASA Prediction Bulletins started showing up via First Class Mail every couple of days. Okay, but how did you even know what satellites were up there and might be visible?

As I was trying to figure out all those things, I graduated from AFIT and headed off to the Blue Cube at Sunnyvale AFS, CA. My jobs were first to develop all the training to activate Falcon AFS—now Schriever SFB—and second to be the Chief of Operations for the GPS Mission Control Center launching the last two Block I satellites and planning for future launches off the US space shuttle (which never happened).

This assignment wasn’t my first (or any) choice, but again, I was in the right place at the right time—this time in Silicon Valley, where people were trying to figure out how to connect computers via dial-up modems. We were developing software for electronic Bulletin Board Systems, or BBSs, while I was also getting data from NASA almost every day and typing it up so I could visually observe satellites. One day I had this genius idea: “What if I set up a BBS and put the data I typed up online for others to download?” That would save them the effort of having to request the data from NASA and type it up, and it would let them take advantage of the software I created to minimize the chance of transcription errors—as if there were actually a lot of geeks out there who wanted to do that and could find my BBS number. But that eventually happened in 1985, just before I left to go to the University of Texas at Austin to get my PhD.

And things only grew from there. My ops experience in ICBMs and GPS helped me focus on providing tools and documentation to support things like interoperability. But even data interchange back then was a challenge. There were proprietary data exchange protocols that were different for different operating systems. And re-typing Spacetrack Report Number 3, to replicate all of the mathematical notation, was only possible in LaTeX—something I was learning to use to write my dissertation. And that had limited application until 1993 when the PDF, or Portable Document Format, first became available. And I still had to type all of the data in by hand while I continued to try to convince NASA to set up their own BBS, which also didn’t happen until 1993.

After finishing my PhD and serving as a professor at AFIT for six years, I was off to Air University at Maxwell AFB, AL—just as the World Wide Web was being born. That was yet another opportunity to press the state of the art and start expanding the educational part of what CelesTrak offers, including writing columns for Satellite Times from 1994 to 1998. It was also where CelesTrak made its first appearance on the Web. Four years at Air University and then three more at AFIT as Associate Dean, Vice Commandant, and Commandant, and then I was off to HQ AFSPC as the head of analysis, reporting to AFSPC/CC and CV—just days before September 11th.

Here all of that experience learning, applying, and teaching space operations came to a head. Of all the things I handled there—like being the DOD analysis lead for the Columbia Accident Investigation—the most important realization was that more than 80% of the comms going to and from our military theaters of operations in Afghanistan and Iraq were going over commercial satellites. And when I started pulling the thread on what we were doing to protect those assets, the answers were less than satisfying.

There were lots of ‘reasons’ and ‘beliefs’ for why we couldn’t do anything, but the bottom line—to be blunt—was that people were ill-informed and complacent—something a military officer simply cannot afford to be. But even as a colonel, beating down the constant misinformation to get anything done was more than frustrating, given the magnitude of what was at stake. Things like, “It will take a supercomputer and millions of dollars to do that, and it would still take so long that by the time the calculations were complete, they would be outdated.” And, of course, “There is no way to distribute the data to everyone who needs it in a timely fashion.”

Eventually, I decided the best way to effect needed change was to retire after 32 years of service and do that from the outside. And I set out to educate the space community on the magnitude of the conjunction assessment problem and—like any good staff officer—to show that many of our challenges could be solved with a little creativity. And CelesTrak was going to be instrumental to that effort.

The Next Phase

This was a major transition point for what we do on CelesTrak. We were moving from making the data easily available, along with the associated documentation and educational materials—essentially inventing the future—to the application phase of our story.

And this is a good time to review our Focus Concepts, particularly Dusty Miller’s quote. A lot of what got us from the beginning of the story to here was the result of curiosity and a willingness to learn.

Those who do not constantly strive to learn are complacent. They either think they know everything they need to know (or actually think they know everything) or that whatever they don’t know is unknowable or at least can’t be figured out by our adversaries without substantial time and cost.

These thoughts manifest in two forms in the military environment: Either we have everything under control or that certain things are beyond our control (or those of our adversaries). Those of you who have studied military history should be able to come up with lots of examples where these ‘beliefs’ led to bad outcomes for the complacent. We must move beyond these ‘beliefs’ by constantly challenging what we know and seeking to learn.

So, our first challenge in 2004 was to show that we actually could provide conjunction assessment at reasonable cost and make that readily available to whoever wanted it, in a timely manner. By now—twenty years into our story—it was pretty clear that waiting on government to solve our problems would not be sufficient. We would have to continue to invent the future and turn to existing low-cost, commercial off-the-shelf (COTS) solutions.

The Birth of SOCRATES (2004)

The first phase of addressing our challenge was to show that the existing beliefs about cost and timeliness were untrue. Believe it or not, it actually only took me about two weeks to do that—literally on some serious drugs and with one arm tied behind my back—a story for another time. I started with a standard desktop computer, existing COTS software (STK), and the publicly available GP data for the catalog. I designed software to ingest the GP data from CelesTrak into STK and to configure STK to calculate all of the close approaches within 5 km for any payload over the coming week—nobody seemed to know which payloads were active back then. The software then took the resulting data and formatted it for CelesTrak users, providing searchable output—by name, International Designator, or catalog number—sorted by TCA (time of closest approach), minimum range, or maximum probability.

That system was SOCRATES—Satellite Orbital Conjunction Reports Assessing Threatening Encounters in Space. SOCRATES became the first SOA, or Service-Oriented Architecture, for conjunction assessment. That is, it performed all of the calculations just once on the server to allow clients to search and review the results. For the operational run on 2005 Aug 17, 2,686 payloads were screened against the full catalog (minus restricted, lost, and analyst sats) of 8,593 RSOs. That run took just 85 minutes to analyze and upload results for use.
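To make the SOA idea concrete, here is a minimal sketch of the kind of one-against-all GP screening SOCRATES performs, assuming the open-source python-sgp4 library and a coarse fixed time step. The real system used STK and a proper filter chain with TCA refinement; the function names and the 60-second step below are my choices for illustration only.

```python
# A minimal sketch of one-against-all GP screening, assuming the
# open-source python-sgp4 library (pip install sgp4). SOCRATES itself
# used STK; this only illustrates the core idea on a coarse time grid.
import numpy as np
from sgp4.api import Satrec, SatrecArray, jday

def screen(primary, secondaries, jd0, fr0, days=7.0, step_s=60.0, thresh_km=5.0):
    """Flag secondaries passing within thresh_km of the primary.

    A real screener would apply prefilters (apogee/perigee overlap, etc.)
    and refine each coarse hit to find the actual TCA and minimum range.
    """
    n = int(days * 86400.0 / step_s)
    fr = fr0 + np.arange(n) * step_s / 86400.0   # fractional days past jd0
    jd = np.full(n, jd0)
    arr = SatrecArray([primary] + list(secondaries))
    e, r, v = arr.sgp4(jd, fr)                   # r: (nsats, n, 3) TEME km
    rng = np.linalg.norm(r[1:] - r[0:1], axis=2) # range to primary, km
    rng[e[1:] != 0] = np.nan                     # ignore propagation failures
    hits = []
    for i, d in enumerate(rng):
        if np.all(np.isnan(d)):
            continue
        k = int(np.nanargmin(d))
        if d[k] < thresh_km:
            hits.append((i, jd[k] + fr[k], float(d[k])))  # (idx, ~TCA, min km)
    return sorted(hits, key=lambda h: h[2])

# Example usage:
# jd0, fr0 = jday(2025, 2, 26, 0, 0, 0)
# hits = screen(Satrec.twoline2rv(l1, l2), secondary_sats, jd0, fr0)
```

The key design point is the one named above: the expensive propagation and pairing work happens once on the server, and clients only search and sort the results.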

Now that we had dispelled the beliefs that it wasn’t possible to do conjunction analysis calculations in an affordable and timely way that made the results easily available, it was time for the second phase of this effort—to examine weaknesses in the process.

The first weakness lay in the use of GP data for the calculations. SGP4—and the GP data produced for use with it—was not designed to meet the accuracy requirements needed for conjunction assessment. That process was designed to maintain track custody for RSOs to avoid creating UCTs (uncorrelated targets). But the GP data was the only publicly available full-catalog data.

The second weakness lay in the fact that the hardest RSOs to track were the ones that maneuvered. Using non-cooperative tracking meant maneuvers had to be detected and processed after they occurred, which could take days if not weeks. Undetected maneuvers present a risk to active satellites, which might require collision-avoidance maneuvers—if only the location of the maneuvering object was known.

And the last weakness was that we didn’t have any source of information that would tell us which satellites were active. We needed that to ensure we produced actionable data—there is little point in producing warnings for a satellite that can’t maneuver. And we needed awareness of which satellites could maneuver to avoid both satellites maneuvering into each other—perhaps while trying to avoid a collision—because their operators weren’t communicating.

The good part about SOCRATES was that the process was robust to changing data sources, which allowed CelesTrak to focus on how to improve them incrementally. In fact, that meant we could segment the data to allow improvements where we had the greatest weakness.

Thinking about these problems revealed a simple solution. If the hardest thing to do was maintain track custody for active satellites because they maneuvered, why not just ask satellite operators to share the data they already produced to perform their mission?

Since they were the ones planning and executing the maneuvers and had to incorporate them into their data products for daily operations, we could just ask for a copy of those data products and use that data instead of GP data when available. The data would have higher accuracy due to mission requirements, would be tagged to a specific object to avoid mixing observations and getting bad orbits, include recent and planned maneuvers, and would allow us to know which satellites were operational and maneuverable. And, if used for SSA, it meant less time would need to be spent on catalog maintenance, so that limited SSA resources could be focused on uncooperative RSOs.

All we needed now was to get satellite operators to buy into the concept of sharing their data. Believe it or not, we started by asking Iridium to share their data in July 2008, followed by a trip to Vandenberg in Aug 2008 to offer the new JSpOC a way to adapt SOCRATES to use their SP data. We weren’t able to get data from Iridium, because their Boeing contractors insisted it was ITAR, and JSpOC seemed uninterested, even though the effort would be at no cost under their existing contract with AGI.

It wasn’t until Intelsat approached us at about the same time and asked, “Hey, if we gave you our ephemeris, could you use that in SOCRATES instead of GP data?” that this phase started getting traction. By the end of 2008, we had Intelsat, SES, Inmarsat, and Telesat—an international consortium of GEO satellite operators—sharing data in the SOCRATES-GEO prototype. Then, in February 2009, Iridium 33 and Cosmos 2251 collided and people started realizing the need to do the same thing for LEO. Eventually, that led to the formation of the Space Data Association (the original SDA) and even JSpOC eventually started accepting ephemerides and using them for conjunction assessment. By the time I retired for the second time, SDA was screening over 700 satellites for 30+ operators—including those from NASA, NOAA, EUMETSAT, and the UK’s MOD—from about as many countries.

CelesTrak continues to use the ‘new’ SOCRATES Plus to demonstrate the magnitude of the conjunction problem and to illustrate ways to make the data more actionable. Today, we are screening 11,055 active satellites (the primaries) against a catalog of 27,141 RSOs (the secondaries), that actually includes some analyst sats.

Along this part of the journey, we were learning a lot about the data. Not only the data provided by the US DOD, but by satellite operators, as well. Of course, we advocated for the use of standard data products by satellite operators, but also worked to convert satellite operator data provided in other coordinate systems and formats, when needed—addressing limitations of legacy software.

One of the things we learned was that there are lots of issues with the data that can teach us even more. After all, every system will have issues, and it is imperative to look for those issues and use them to learn how to avoid them in the future. And we needed to use those lessons learned to check every conjunction result. Using the data blindly is just one manifestation of the “we have it all under control” aspect of complacency—and then you are likely to learn about those issues at the least opportune time.

We knew there were issues with the GP data from anecdotal evidence where satellites weren’t where we expected them to be. But how could we quantify how often that happened or how bad the data might be?

Realizing the Need for Supplemental GP Data (2007)

Then one day in December 2007, I got an e-mail from a CelesTrak user in Brazil, reporting that the position calculated by SGP4 using the latest GP data for one of the active GPS satellites was off by over 20,000 km from the position calculated for that object using data from the GPS almanac. Wait, what?! He asked if I knew which one was right. I had actually done a detailed paper assessing accuracy of the GP data for GPS satellites in January of that year—“Validation of SGP4 and IS-GPS-200D Against GPS Precision Ephemerides,” so I had a pretty good idea what the answer was.

First, I took the almanac and GP data and verified the user’s report. Then I checked the NGA precise ephemerides to see what they showed. As expected, the almanac was correct. And this wasn’t the first time we’d seen this problem—it was just the worst one. How could this even happen? Even though JSpOC and 2SOPS—the organization operating the GPS constellation—were both part of the same US Air Force, they simply didn’t talk or share data. So, while 2SOPS performed orbit determination and produced the almanacs and NANUs—to advise users of issues with the GPS constellation—JSpOC didn’t use any of this data for SSA, partly because they couldn’t ‘translate’ the data.

Now, 2SOPS—or the GWCC today—issues NANUs days ahead of any planned maneuver—in a Forecast Delta-V (FCSTDV) message—since that would impact GPS accuracy until a new orbit solution was determined. And once that maneuver happens, they issue another NANU—a Forecast Summary (FCSTSUMM) message—setting that satellite healthy in the almanac. So, if JSpOC couldn’t ingest a GPS almanac, how would they know where to look for a GPS satellite after it maneuvered, other than searching for it with limited resources—in this case optical sensors like GEODSS?

Again, the solution was pretty straightforward. We knew the almanac data was correct and we knew JSpOC needed GP data—in the TLE format—for their analysts or SSN sensors to use to relocate the satellite. Since STK had tools to fit observations—like those from an SSN sensor—to produce an SGP4-compatible solution, why didn’t we just propagate the GPS almanac IAW the GPS ICD and then fit that ephemeris using SGP4 to produce supplemental GP data—or SupGP? Then, whenever JSpOC ‘lost’ a GPS satellite, they could just check CelesTrak to get data they could use to relocate that satellite. They didn’t need to ask for help or even let anyone know they were checking—and CelesTrak doesn’t even have user accounts, so we would have no way to even know. The problem would just go away.
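CelesTrak did that fit with STK’s tools, but the idea is easy to sketch. Below is a hedged illustration in Python, assuming the python-sgp4 and SciPy libraries: treat the ephemeris points—already converted to the TEME frame that SGP4 works in—as pseudo-observations and least-squares-adjust the SGP4 mean elements until the propagated positions match. The function names and parameterization are mine, not CelesTrak’s.

```python
# A hedged sketch of fitting SGP4 mean elements to an ephemeris to make
# SupGP data. CelesTrak used STK's fitting tools; this only shows the
# idea. Ephemeris positions are assumed already converted to TEME (km).
import numpy as np
from scipy.optimize import least_squares
from sgp4.api import Satrec, WGS72

def _residuals(x, epoch_1949, jd, fr, r_eph):
    """x = [no_kozai (rad/min), ecco, inclo, nodeo, argpo, mo (rad), bstar].
    epoch_1949 is days since 1949 Dec 31 00:00 UT, per sgp4init."""
    sat = Satrec()
    sat.sgp4init(WGS72, 'i', 99999, epoch_1949,
                 x[6], 0.0, 0.0,        # bstar, ndot, nddot
                 x[1], x[4], x[2],      # ecco, argpo, inclo
                 x[5], x[0], x[3])      # mo, no_kozai, nodeo
    res = []
    for j, f, r_ref in zip(jd, fr, r_eph):
        e, r, _ = sat.sgp4(j, f)
        res.extend(np.subtract(r, r_ref))
    return np.asarray(res)

def fit_supgp(x0, epoch_1949, jd, fr, r_eph):
    """Return fitted elements and the RMS of fit in km (the value that
    becomes the 'SupGP RMS' column discussed later in this talk)."""
    sol = least_squares(_residuals, x0, args=(epoch_1949, jd, fr, r_eph))
    rms_km = float(np.sqrt(np.mean(np.sum(sol.fun.reshape(-1, 3)**2, axis=1))))
    return sol.x, rms_km
```

The output is a fully SGP4-compatible element set, which is the whole point: anyone with existing SGP4-based software can use it unchanged.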

All of the results of the GPS validation test case were written up on the new SupGP page and reported to JSpOC in January 2008. And it turned out it was so easy to do, that we applied it to the GLONASS constellation next, using their rapid ephemerides. The advantage to using these two constellations was that we had centimeter- or meter-level truth to compare our results against.

Working on that project, it quickly became apparent that there were lots of advantages to SupGP data. First, it should be more accurate, since ‘observations’ were tagged to a specific object—you couldn’t mix observations from different objects, as might happen in GEO clusters. It would not be subject to the vagaries of weather or day/night cycles for optical sensors. It would include the effects of the latest maneuvers and perhaps even planned maneuvers. And perhaps most importantly, it worked with SGP4, which was available by this time in almost any satellite tracking software—thanks to efforts by CelesTrak back in the late 1980s to highlight the need to use SGP4 with GP data and to provide the source code to do that correctly. GP data was also much smaller than ephemerides, requiring less bandwidth to transmit and less storage. And you could continue to propagate the SupGP solution beyond the end of an ephemeris, if needed. But, perhaps best of all, it avoided having to develop software to handle all sorts of different data formats and reference frames—software that would then have to be independently validated.

As the years went by and CelesTrak built rapport with satellite operators and got access to more of their data, the number of objects we produced SupGP data for continued to grow. In fact, today, CelesTrak provides SupGP data for 8,272 of the 11,054 satellites with GP data (74.8%). And that means we have eliminated issues with lost satellites and missed maneuvers, right? Sadly, no.

Even though it has been 17 years now since CelesTrak first started producing daily SupGP data for the GPS constellation, 18 SDS continues to lose these satellites following maneuvers reported by GWCC in FCSTDV NANUs. The latest case involves not one but two GPS satellites that maneuvered in early February and have been lost for 17 and 20 days with discrepancies of 2,785 km and 1,001 km, respectively. PRN02 seems to have been located as of 2025 Feb 25, but PRN28 has updated data that seems to be associated with some other object and is still 1,072 km off.

In fact, this type of situation—where we have SupGP data for a satellite but 18 SDS has lost that satellite—has become so prevalent that we created a new tool to make finding—and fixing—those cases as easy as possible. Spoiler alert, it only took a little over a day to create this capability and it assesses the ~8,500 SupGP records so quickly—in about 10 seconds—that we run it every half hour.

By now, you may be thinking I’m making all of this up, but one of the things we are adamant about on CelesTrak is showing our work. So, let’s get down to looking at that.

The first problem we run into—and that many satellite operators are not aware of—is that even though 18 SDS gets satellite ephemerides for a lot of satellites via Space Track and uses them for conjunction assessment, they do not use those for SSA. Just because it seems perfectly reasonable that they would, does not make it true, and relying on the false assumption that they know about recent maneuvers is a dangerous choice. Whether you believe it or not, we will show you many cases where that clearly cannot be true.

So, let’s look at that tool—the SupGP vs. GP Comparisons tool. It allows us to quickly assess any satellites we have SupGP data for and perform various comparisons. When the tool comes up, by default it is sorted by the worst results comparing the 18 SDS GP data to CelesTrak’s SupGP data. You can see from this example that the RMS differences can be pretty extreme—the top one being 11,200 km at the time of putting this presentation together. How can that even be?

Well, the O3B satellites are in equatorial orbits at an altitude of ~8,000 km, so if the two orbit solutions were on opposite sides of the Earth in the same orbit, the difference could be almost 29,000 km. So, this is a pretty bad case, but definitely possible. And the problem arises because only a few optical sensors can see objects in these orbits. And optical sensors are typically limited to nighttime operations with good weather. Having bad weather when even a small maneuver occurs can make recovery difficult.

But how do we know how good the SupGP data is? Well, we also have an RMS difference column for the SupGP data. That compares the SupGP fit against the original source data—in this case, the SES ephemeris uploaded to Space Track for 18 SDS to use in CA screening. When we fit that ephemeris with SGP4 to produce the SupGP data, we obtain the RMS difference of the fit as a byproduct. Over the period of that fit—a 6-hour span from the time of this comparison run—the RMS difference is 47 m. Now, the SES ephemeris might be off, but it is unlikely to be by much. And the SupGP data matches it extremely well, so the 18 SDS RMS difference of 11,200 km definitely points to a problem. That result is probably not too unexpected, given that the latest GP data is 45 days old.
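For reference, here is a sketch of how a GP-vs-SupGP comparison like that can be computed with the python-sgp4 library: propagate both GP solutions over the same span and take the RMS of the position differences. The span and sampling below are my choices for illustration, not the tool’s actual settings.

```python
# A sketch of a GP-vs-SupGP comparison: propagate both solutions over
# the same span and compute the RMS position difference. Span/sampling
# are illustrative; the actual tool's settings may differ.
import numpy as np
from sgp4.api import Satrec, jday

def rms_diff_km(tle_a, tle_b, jd0, fr0, hours=6.0, step_min=5.0):
    """RMS position difference (km) between two GP solutions."""
    sat_a = Satrec.twoline2rv(*tle_a)   # tle_a = (line1, line2)
    sat_b = Satrec.twoline2rv(*tle_b)
    d2 = []
    for dt_days in np.arange(0.0, hours * 60.0, step_min) / 1440.0:
        ea, ra, _ = sat_a.sgp4(jd0, fr0 + dt_days)
        eb, rb, _ = sat_b.sgp4(jd0, fr0 + dt_days)
        if ea == 0 and eb == 0:          # skip propagation failures
            d2.append(sum((x - y) ** 2 for x, y in zip(ra, rb)))
    return float(np.sqrt(np.mean(d2)))

# jd0, fr0 = jday(2025, 2, 25, 0, 0, 0)
# print(rms_diff_km(gp_tle, supgp_tle, jd0, fr0))
```

A computation this light is exactly why the full set of ~8,500 SupGP records can be assessed in seconds and re-run every half hour.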

Any of these columns are sortable, so you could also sort on the SupGP RMS to see which is worst—it turns out to be STARLINK-31664. If you click on the graph icon there, you will see this satellite is in a rapid decay, and there is an unexpected increase in orbital altitude right at the end, which is probably a glitch. As I said before, any system can fail. But this value shows us that neither the GP nor SupGP data should be considered reliable for this satellite. This satellite actually decayed just after this report.

As you explore this table, you will notice that the assumption of a strong correlation between data age and RMS difference is not supported—it all depends on the nature of the orbit and how much a satellite might be maneuvering.

So, the Comparisons table shows the problem, but what about the solution I promised? Well, in the very first column on the left, there is a link to the latest SupGP solution. So, for O3B FM4, 18 SDS could click on the icon in the first column, get the latest fully SGP4-compatible SupGP solution in their TLE format, and pass that to their analysts—who might actually be tracking that object in the analyst catalog without knowing what it is—or to SSN sensors, like the ones here on Maui, to see if there is an object at that orbital location. If there is, in all likelihood it is O3B FM4.

It really couldn’t be easier. One table to highlight the worst disagreement between the SupGP data and the 18 SDS GP data, and when the SupGP RMS value is good, click on the link for the SupGP data to resolve the problem. Done properly, you should be able to quickly reduce the 279 cases that have a GP RMS of 25 km or more to zero and locate 2 active satellites on the Lost List.

Understanding Force Models

Now you may be thinking, “But we have other data that is better than GP data.” So, let’s examine that assumption. Of course, you would be talking about the SP (Special Perturbations) data or maybe even the new XP (SGP4-XP) data. Certainly, you’ve been told how much better these data types are. But is it true?

If you think through this assumption, you should realize the force models for these two data types are better than the one for GP data—and you would be correct. Before we continue, though, what exactly do we mean by a force model? As Isaac Newton observed when he formulated his laws of motion, an object at rest—or in uniform motion—remains so unless it is acted upon by an external force, where ‘at rest’ is relative to some inertial frame of reference. If we know the forces acting on an object, we can predict that object’s motion over time.

In near-Earth orbit, we need to consider the forces of gravity—from the Earth as well as third bodies like the Sun and the Moon—including the effects of the Earth’s nonuniform mass distribution. We also need to consider things like atmospheric drag and solar radiation pressure. SP can model the Earth’s nonuniform gravitational field to much higher order than SGP4 can—and it can even include solid-Earth tides. And SGP4-XP models solar radiation pressure, which SGP4 does not.
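To make ‘force model’ concrete, here is a toy acceleration function—two-body gravity plus the J2 oblateness term plus cannonball drag. Real SP force models add high-order gravity, third bodies, SRP, tides, and more; the constants and the crude exponential atmosphere below are illustrative only.

```python
# A toy force model: two-body gravity + J2 oblateness + cannonball drag.
# Real SP models add high-order gravity, third bodies, SRP, tides, etc.;
# the constants and crude exponential atmosphere here are illustrative.
import numpy as np

MU = 398600.4418        # km^3/s^2, Earth's gravitational parameter
RE = 6378.137           # km, Earth's equatorial radius
J2 = 1.08262668e-3      # Earth oblateness coefficient

def accel(r, v, bc=0.01):
    """Acceleration (km/s^2) at ECI position r (km), velocity v (km/s).
    bc = Cd*A/m in m^2/kg. Note: no maneuver term appears anywhere."""
    rn = np.linalg.norm(r)
    a = -MU * r / rn**3                                   # two-body gravity
    z2 = (r[2] / rn) ** 2
    k = 1.5 * J2 * MU * RE**2 / rn**5
    a = a - k * r * np.array([1 - 5*z2, 1 - 5*z2, 3 - 5*z2])  # J2 term
    h = rn - RE                                           # crude altitude, km
    rho = 3.7e-12 * np.exp(-(h - 400.0) / 58.5)           # kg/m^3 near 400 km
    vm = v * 1000.0                                       # m/s, ignoring corotation
    a = a - 0.5 * rho * bc * np.linalg.norm(vm) * vm / 1000.0  # drag, km/s^2
    return a
```

Notice what the sketch makes obvious: there is no term for thrust, so any maneuver shows up only as an error the estimator must absorb somewhere else—which is exactly the point that follows.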

Are there any other forces we’re not considering, though? Oh yeah, the one that made this such a difficult problem in the first place—maneuvers. None of these force models include maneuvers. If you miss a maneuver in the observations, you’re going to get a bad orbit, regardless of the fidelity of the propagator.

Let’s examine the effects of incomplete force models for a moment and start with something simple. Let’s look at a car driving through a city and let’s start by assuming a very simple force model—none. The car enters the city grid at a constant speed and cannot accelerate—speed up or slow down. If we took observations at the circles, we would expect the time between successive observations to be equal, as would be the velocity. And if we compared the observations to the data from our propagator, they should match exactly—the RMS difference would be zero.

Now, what happens if we start to notice that things aren’t lining up right at the observations? The time and velocity measurements are off and our RMS difference starts to increase—that is, our fit starts to degrade. Remembering Newton’s law, we would have to assume there was(were) some other force(s) acting on our car. But where? How?

If we assume cars do not have brakes or accelerators (or rarely use them), perhaps these accelerations are due to drag and winds in the urban canyons which would dominate in the direction of motion. We might not have thought that those effects were important before, but maybe we should add drag into our force model. When we do, we find that improves our RMS difference—our fit.

Now, what happens if we have cars that do have accelerators and brakes, and they are in traffic that ebbs and flows? Those would be considered maneuvers. But when we apply our force model that does not include maneuvers, what happens to those unmodeled forces? Well, they’re going to get absorbed into the drag term. That’s exactly what happens in not only the GP data, but the XP and SP data as well. Depending upon the magnitude of these forces, the drag term may be able to adjust the orbit reasonably well, actually masking the fact that there are maneuvers.

But the reality is that we are fudging the solution and don’t know when our assumptions are going to break down. What if we have a large maneuver? That might cause us to miss an observation and continue to apply resources looking for it along the original path. Even if we did somehow pick up that object later, the discrepancy with the original path might lead us to ignore that as being the original object and add it to the analyst catalog. And if a correlation is made without knowing where the maneuver occurred, we might be better off discarding the previous data—adjusting our fit span—to discard those before the maneuver(s), rather than degrade the quality of our solution. And if we don’t properly model the maneuvers, our model will have the largest prediction errors around the times of the maneuvers—past or future.

It’s a hard problem, but the SupGP data fixes a lot of that. Using Starlink as an example, SpaceX collects GPS observations on board and downlinks them to their flight dynamics team with a known association. They already know about past maneuvers and don’t have to detect and process them. They incorporate planned maneuvers into their ephemerides—along with the rest of their force model. To recreate the orbit, you don’t have to know the actual force model—regardless of its complexity—all you need to know is how to interpolate the data.
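And interpolation is genuinely simple. Here is a hedged sketch using SciPy’s barycentric (Lagrange-equivalent) interpolator over a sliding window of ephemeris points; the window size is my choice for illustration, though CCSDS OEM files actually state the recommended interpolation method and degree in their metadata.

```python
# A sketch of recovering position at an arbitrary time from an operator
# ephemeris via sliding-window polynomial interpolation. Window size is
# illustrative; CCSDS OEM metadata states the recommended method/degree.
import numpy as np
from scipy.interpolate import BarycentricInterpolator

def interp_position(t, times, positions, degree=7):
    """Interpolate position at time t from ephemeris samples.

    times: (n,) seconds since epoch, ascending; positions: (n, 3) km.
    Uses the degree+1 samples bracketing t."""
    npts = degree + 1
    i = int(np.searchsorted(times, t))
    lo = max(0, min(i - npts // 2, len(times) - npts))
    w = slice(lo, lo + npts)
    return BarycentricInterpolator(times[w], positions[w])(t)
```

Because the operator’s force model—maneuvers included—is already baked into the ephemeris points, the consumer never has to model any of it.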

Now, let’s get back to our comparisons. Of the (as of now) 284 cases where the 18 SDS GP RMS is more than 25 km, 261 of those are for Starlink satellites. SpaceX uploads their ephemerides to Space Track for CA screening 3 times each and every day. It should be pretty clear at this point that 18 SDS is not using that data for SSA. If it’s still not clear, I can share all sorts of other examples at another time.

By now, I hope you are understanding the value of using operator-provided data in improving our SSA. We could use that ephemeris data directly, but there are a host of issues that make that more difficult in many situations due to things like the sheer amount of data needed. And DOD’s legacy systems aren’t designed to handle that data—in its various formats and reference frames. The SupGP data serves as an intermediary to the full-precision ephemerides to improve overall SSA.

Implications for Conjunction Assessment and Safety of Flight

But at least 18 SDS is using those ephemerides for CA, right? Well, not exactly the way you might think.

So, we know that many satellite operators upload ephemerides to Space Track every day for 18 SDS to screen for close approaches. This is the result of the work with SOCRATES back in 2008 to show that it could and should be done that way. And if you have the data used, it should be easy to verify that the calculations for a particular close approach were done properly. Of course, just because it is easy doesn’t mean it shouldn’t be checked—each and every time.

Many years ago, when I was in charge of SDC (Space Data Center) operations, we got LOTS of CDMs (Conjunction Data Messages) from 18 SDS for our operators. We had a pretty tight process where operators uploaded their data to us and we uploaded it to Space Track in a way that let us identify whether a particular CDM used the same ephemeris we were using. Even though we were doing everything we could to reduce the latency involved in producing results, it wasn’t unusual for the CDMs we got to be based on an earlier ephemeris. But on occasion, we would receive CDMs using the same ephemerides, and the results matched ours.

That’s great. It worked once, so we don’t need to do that any more, right? Wrong.

One day, I noticed a CDM that reported a close approach that used the same ephemeris as we did but had a dramatically different set of calculated circumstances. How could that be? I spent a lot of time trying to see what we might be doing wrong. Then, as I was looking over the raw CDM data for the second or third time, I saw it. In the optional data that reported the apogee and perigee for this GEO satellite, it showed a negative perigee. Wait, what? Yes, it showed a perigee below the surface of the Earth.

I immediately reached out to 18 SDS to try to determine how that could be—and how a negative perigee wouldn’t be flagged as an error. I asked how they could interpolate the ephemeris data to come up with that result. The response I got puzzled me at first: “Oh, we don’t interpolate, we fit.” What does that mean? 18 SDS couldn’t seem to answer that question.

Eventually it dawned on me. When the CDM format was created, it was done via the CCSDS standards process. DOD wasn’t interested in going through an extended standards development process and already had a process and format (well, specific data elements) for doing that. DOD insisted that the community use their format—which included yet another way to represent orbital data.

At the time, I was pretty adamant that this was a mistake. CCSDS had already developed three orbital data formats: OEM (Orbit Ephemeris Message) for ephemerides, OPM (Orbit Parameter Message) for state vectors, and OMM (Orbit Mean-Elements Message) for GP data. I insisted that there was no reason to assume both objects involved in a conjunction would use the same type of orbital data and that, rather than converting a standard format to the state vector being proposed for the CDM (via some unknown process), the message should include the original OEM/OPM/OMM data actually used for the calculation.

Of course, the CDM was designed to handle SP data, which was in the form of a VCM or Vector Covariance Message. At the time, JSpOC still hadn’t come to accept why they should screen using operator ephemerides, as SOCRATES and SDC had already been doing.

If you look at the CDM format, you find there are two state vectors and an associated set of force models for each. And guess which force isn’t included? That’s right, maneuvers. There is no way to include maneuvers, as can be done with an ephemeris. Of course, to get everything to match, you have to re-fit the ephemeris with the force model specified in the CDM. That works fine if there are no maneuvers. But if there are maneuvers, you have just smoothed out any maneuver information in the ephemeris and if a close approach occurs around a maneuver, the fit will be worse, as we discussed earlier.

It’s still not clear what part of an operator ephemeris is being fit to produce a CDM, but the fit occurs before the calculations.

This is yet one more example of people not thinking through the implications of a decision beforehand and not checking—or validating—their assumptions on each and every event. And once again, this is a situation where there is an easy solution—that was already being used in the SDC.

So, why does this matter? Well, right now we’re only considering objects where we can get authoritative data from cooperative satellite operators. What about those operators that are unwilling or unable to help?

Looming SSA Challenges

Let’s start with SpaceX’s Transporter and Bandwagon launches that may have dozens or even more than a hundred payloads.

Just looking at the last five of these launches, you can see there were anywhere from 29 to 131 passengers on these launches (not everything has deployed from the T-12 mission yet). You can also see when each launch occurred and when we first got GP data for any of these objects. That delay has been at least 9 days and as much as 29 days on T-12. That’s a long time to have dozens of objects in Earth orbit that don’t have data good enough to put in the GP catalog and can’t be effectively screened for potential collisions.

It also makes the process of identifying these objects difficult. If a satellite operator has to scan through dozens of objects with a high-gain—narrow beamwidth—antenna, weeks after launch, their satellite may already be dead. Why? Because they assumed DOD had it all under control—which DOD regularly tells people—and they assumed they would know their satellite’s orbit and identification right after it is deployed.

Failing to plan for reality may mean the operator cannot perform required early-orbit activity—like activating power systems—or communicate with their satellite to learn about and deal with an unplanned emergency before orbital data becomes available. You can see that a large percentage of each launch’s passengers are not identified, and some even decayed before ever being identified.

Of course, there have been similar launches done by Russia, India, and others and we expect this trend to continue. In fact, Transporter-13 is scheduled for next month [March 2025].

Now, just like we do for every Starlink launch—where delays in the release of 18 SDS GP data can be 5-8 days—CelesTrak produces ephemeris-based SupGP data for any satellite operator that provides us ephemerides. So far, that has only been Planet, who typically has many dozens of their satellites on these launches. We have been getting Planet’s ephemerides—every 2 hours—at least as far back as 2017. CelesTrak produces SupGP for these satellites within 2-4 hours post-deployment. And we can definitively match the Planet satellites to the 18 SDS GP data within minutes, as soon as that data becomes available.

And what about the rest of the world? Right now, China has 3 Starlink-like constellations already launching that are each planned to have 10,000 to 15,000 satellites. The first Qianfan launch of 18 satellites took 7 days before we had any data. The second one took almost a month. 18 SDS asserts they are screening against those RSOs as analyst sats, but if they won’t release the data from the analyst catalog until the quality is good enough, that seems unlikely to be helping protect safety of flight, as we’ve already seen.

And we’re continuing to build up what you might think of as ‘technical debt’—problems we are forced to accept now that could have been avoided up front. There are actually 889 objects in the 18 SDS SATCAT for which 18 SDS seems not to know whether they are payloads or rocket bodies; 227 of those decayed from orbit without ever being identified. There are 37 cubesats deployed from the ISS—usually one or two at a time into a protected regime—and 34 of those decayed without ever being identified. And the numbers for the PRC, which the US considers an adversary, aren’t particularly good, either.

CelesTrak has better numbers, because we work with satellite operators and amateur radio users to identify many of these objects. Even for ones we can’t specifically identify, we can assess orbital data and use open-source information to determine which ones are highly likely to be payloads—and we can monitor that over time. You would think DOD would be putting more effort into doing the same thing, but their data suggests otherwise. And even when CelesTrak works with operators to verify their analysis and passes that to 18 SDS, it is often ignored.

Conclusion

For now, that wraps up our presentation. But the story continues. We’ve covered a lot and there is a lot more going on at CelesTrak, but that would take more time than we have today. Hopefully these examples have raised your awareness—not only of the problems, but also of possible—and already existing—solutions. And the national security implications should be obvious—if they aren’t to you, I can assure you they are to US and allied adversaries. I look forward to your questions.

Author's note: As stated above, any system can (and will) fail upon occasion. That, in itself, does not constitute a failure. Knowing that it has failed and doing nothing about it, however, does. —T.S. Kelso