Keynote Address

6th Space Traffic Management Conference

Kelso, T.S., Keynote address, presented at the 6th Space Traffic Management Conference, Austin, TX, 2020 February 19.

Thanks to the IAA and the organizers of this year’s conference—and especially Dr. Jah and Dr. Howard—for giving me the opportunity to kick off this event.

It’s my responsibility today to get each of you ready to discuss the challenges that lie ahead of us in space traffic management (STM). And I’m going to do that in much the same way I did with my graduate students—by challenging your current view of the world and pushing you out of your comfort zone. And I’m going to ask a lot of questions, rather than having you believe I know all the answers. I will provide some answers today, but I encourage you to go back and try to answer them, as well.

I’m going to be as objective as possible and direct—perhaps brutally so at times. I am not attacking anyone or any organization, so if you think I said something like that as we go along, I’d ask you to rethink what I’m saying.

Let’s start by considering the framework we’re addressing. I think we all know that any good STM system requires good space situational awareness (SSA)—a knowledge of what’s up in Earth orbit, where everything is located, where it’s going or has been, and what the various patterns of behavior are. And the foundation of SSA is data. It’s not just good data, it’s transparent data and processes. Let’s consider what that means.

Many researchers seem to assume they will have the data they need to support their particular research effort and move on. I will tell you the same thing I told every one of my graduate students: The first thing you do after making an assumption is attempt to validate it. If you can’t, then you should clearly state that in your research results. Let’s see if I can challenge some of your assumptions.

But first, why is this important? Because, if data is the foundation for the SSA we need for good STM, then proceeding without addressing the flaws and limitations of that data will not provide the solid foundation we need going forward.

Now, to be fair, many of us are reluctant to raise concerns about the data because of what I believe are misguided and outdated data policies. Many of these are couched in terms of national security. Let’s look at an example.

Many of you know that I run CelesTrak and that system is designed to share SSA data as widely and openly as possible. Increasingly, it also provides a suite of tools to better understand the space environment and the data that underpins that understanding. As such, I get to work directly with everyone from policymakers to satellite operators to researchers and the general public.

I often receive requests for help and I try to do whatever I can to provide that. In one recent case, after CYGNUS NG-12 departed from the ISS on Feb 1, it deployed 14 cubesats. Many of the researchers operating those satellites probably assumed they would get orbital data identifying their satellite and telling them where it was almost immediately after deployment. We got data for 5 cubesats within 1 day and for 3 more 5 days later. We were still missing data for the other 6 objects until Feb 11.

The university researchers for one of those cubesats asked me for help on Feb 5 because they could not contact their satellite and it wasn’t heard using any of the 5 existing, as-yet-unidentified TLEs. I reached out to one of our commercial tracking networks and asked if they could help by providing data for the missing cubesats. Their response was that they did not track objects that were not in the public catalog. Can you think of why that might be the case? Because tracking a classified US satellite is like touching the third rail on the subway. Doesn’t matter whether you did that intentionally or accidentally—it’s going to hurt.

So, let’s look at some assumptions we see made every day and the associated realities. Everybody knows we need orbital data for everything we can track in orbit to perform STM tasks like conjunction assessment. And, because that data is not perfect—in fact, no data is—we also need covariance—a measure of the uncertainty of the data. Many of you here today probably assume that the US government has that in spades.

Let’s start by saying that if they do, they do not share it.

To begin with, how do I know these things? Because as the Operations Manager for the Space Data Center (SDC), I am responsible for supporting safety of flight for almost 800 satellites every day—satellites operated by everyone from civil organizations like NASA, NOAA, and EUMETSAT to commercial companies like Intelsat, SES, and Planet. Thirty operators in all, from about as many countries. In that role, we also work closely with Space Track and 18 SPCS.

We get SP ephemerides almost every day, but they do not include covariance. We do get covariance in Conjunction Data Messages (CDMs), but only for objects associated with events predicted by 18 SPCS. And most of the screenings done by 18 SPCS use only their own SP data, which does not incorporate planned maneuvers. If they aren’t screening with the maneuver data provided by operators, they will predict events that won’t happen or miss ones that will. So, we don’t get all the covariance data we need there, either.

And the reality is that limitations on predicting orbits without operators—all operators—providing maneuver information won’t be fixed by more and/or better sensors or even better orbit propagators. The best that can be done is to reduce the orbit determination (OD) delay to real time. Operators need to provide orbital flight plans just as airplanes do in our airspaces.

And if you’re assuming satellites don’t really maneuver that much, you may be a bit behind the times. Large constellations like SpaceX’s Starlink or OneWeb use near-constant ion propulsion, as do a growing number of satellites going to GEO. And even cubesats, like Planet’s Dove satellites, appear to be maneuvering as they change attitude profiles throughout their orbits. And soon, many cubesats will start using ion propulsion, too.

We also need data on object size to calculate probability of collision. We used to get that—in the form of radar cross sections (RCS)—until some years ago, when US data policy started withholding it. Yes, RCS has its issues, but I still don’t understand this decision.

We need data on mass, too. Why? Because the one thing that will change with better sensors is that we will see smaller pieces of debris in orbit. Now, the more objects we track, the more likely we will be to have conjunctions. If these new sensors provide improved covariance, that can eliminate a lot of these false alarms. But at some point, we will have cases where we have confidence that some small piece of debris may hit some satellite. And there will likely be a lot more of these.

If you’re an operator of a large constellation and getting a lot of warnings, how do you prioritize? You do that by managing risk—the probability of collision times the consequence of that collision—rather than just probability of collision or miss distance. Using orbits, covariance, size, and mass, you can calculate risk and focus on events that might cripple your satellite or, worse yet, adversely impact the space environment.
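The prioritization described above can be sketched in a few lines. This is a minimal illustration, not an operational tool: the field names, the events, and the use of impact kinetic energy as the consequence term are all my assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Conjunction:
    event_id: str
    pc: float               # probability of collision
    impactor_mass_kg: float  # estimated mass of the secondary object
    rel_velocity_ms: float   # relative velocity at closest approach (m/s)

def consequence(c: Conjunction) -> float:
    # One possible proxy for consequence: kinetic energy of the impactor (J).
    return 0.5 * c.impactor_mass_kg * c.rel_velocity_ms ** 2

def risk(c: Conjunction) -> float:
    # Risk = probability of collision times consequence of that collision.
    return c.pc * consequence(c)

def prioritize(events):
    # Work the highest-risk events first, not just the highest-Pc ones.
    return sorted(events, key=risk, reverse=True)

# Hypothetical events: a likely hit by a tiny fragment vs. an unlikely
# hit by a dead rocket body.
events = [
    Conjunction("A", pc=1e-4, impactor_mass_kg=0.05, rel_velocity_ms=14000.0),
    Conjunction("B", pc=1e-6, impactor_mass_kg=900.0, rel_velocity_ms=14000.0),
]
ranked = prioritize(events)
```

Note that event B tops the list despite a Pc two orders of magnitude lower—its consequence dominates—which is exactly why a miss-distance or Pc-only sort can mislead.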

We need to know which satellites are operational, so you can decide whether to contact the operator to see if they have better data and are willing to share it. CelesTrak provides the only regularly updated source of operational status. If it is available from other sources, it is not acknowledged or shared.

We also need to know whether a satellite is maneuverable, not only to see if they might be able to maneuver more easily to avoid a collision risk, but to ensure they don’t surprise you when you least expect it. 18 SPCS collects this data, but it is not clear how comprehensive it is and does not seem to be publicly available.

So, we need a LOT of collaboration with satellite operators and launch providers to collect this information. In fact, just because new sensors may be able to better track where an object was, predicting where it will be also depends on information like size, shape, mass, and attitude. You can estimate (guess) these values, but the more your estimate is off, the worse your prediction will be.

Of course, you may not even realize that we don’t get orbital data for a surprising number of objects in orbit. And I’m not talking about things smaller than 10 cm. So, how many objects do we track? I get that question a lot.

Let’s start with the number stated by 18 SPCS recently: 26,000. Now let’s break that down. We only get data for 18,300 objects in the public satellite catalog or SATCAT. But that SATCAT shows another 2,000 objects should have data. A small number are in orbits beyond Earth orbit. Where is the data for the rest?

Almost 1,300 of them are lost—not tracked in over 30 days. Many of these are tracked in a separate analyst catalog, but most of that data is not available to satellite operators or the public. More on that in a minute.

Another 500 are restricted for national security reasons. Some of these are rocket bodies or satellites launched decades ago and which are now long dead. Recent US policy directed release of this information for 200 of these objects, but that policy seems to be being subverted. Yet, withholding this data is actually undermining national security by jeopardizing safety of flight.

That leaves about 6,000 other objects in the analyst catalog. These objects are treated separately because 18 SPCS cannot determine what the objects are or even what launch they came from. In some cases, they may be tracking objects lost when they maneuvered, but can’t make the correlation. In other cases, they are still trying to sort out the data for things like debris events or launches with multiple payloads. We only get data for a little under 500 of these. That means additional data for at least 4,000 objects that are a threat to safety of flight is being withheld due to data policies and not any national security issue.
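The catalog accounting above is easy to lose track of, so here is the arithmetic spelled out using the approximate figures quoted in this talk (early 2020); the variable names are mine and the totals are rounded, so the sum only roughly matches the stated 26,000.

```python
# Approximate figures quoted above (early 2020).
total_stated     = 26_000  # number recently stated by 18 SPCS
public_with_data = 18_300  # public SATCAT objects we actually get data for
lost             = 1_300   # not tracked in over 30 days
restricted       = 500     # withheld for national security reasons
analyst_catalog  = 6_000   # uncorrelated objects in the analyst catalog
analyst_released = 500     # analyst-catalog objects we do get data for

# Tally the categories against the stated total.
accounted = public_with_data + lost + restricted + analyst_catalog
gap = total_stated - accounted  # small residual: deep-space objects, rounding

# Analyst-catalog objects with no publicly available data.
withheld = analyst_catalog - analyst_released
```

Running the tally gives roughly 26,100 accounted for and about 5,500 analyst-catalog objects with no public data—consistent with the “at least 4,000” withheld objects cited above.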

And there is missing data, too. There are almost 50 ‘holes’ (as of Feb 11) in the current SATCAT for objects that have been launched—like the 34 OneWeb satellites launched Feb 7—but which are still only tracked in the analyst catalog. And the more than 2,200 pieces of debris from three CENTAUR R/B debris events over the past year being tracked by our Russian colleagues amount to only 144 objects in the US SATCAT.

I recently had a journalist tell me that this kind of stuff is “all very confusing to people and hard for us [in] the media to cover.” Yeah, it makes my head hurt, too, and it shouldn’t be this hard.

Now, this may sound like I’m picking on 18 SPCS—something I said up front I wasn’t going to do. I’m not. 18 SPCS does not set the data policies for what can be released. They cannot unilaterally release data even if they believe it is the right thing to do. They have difficulty tracking and processing the growing number of objects in Earth orbit because the US has failed on numerous attempts over the past three decades to upgrade hardware and software designed for a different era.

And it also sounds like I’m picking on the US government, but to be fair, the US has been the primary source of SSA information for the world for decades. Does the US have room to improve? Definitely. But this is an international effort and it’s past time for others to step up and do their part.

So, a lot of our issues seem to be more the result of data policies than physics. Let’s consider the long-term implications of that for the global space economy. Many of you have been touched by these decisions throughout your careers.

We’ll start in the classrooms where we’d like to encourage our young minds to pursue careers in STEM or at least be familiar with the concepts. How do you stimulate curiosity and engage interest without data? Every scientific inquiry needs data at some point and we teach science and math using data in the classroom. No easily accessible SSA data for the classroom means no college students prepared for those subjects.

Similarly, no data for college professors means they probably aren’t teaching SSA-related concepts and almost certainly are not doing research. Some of you here today probably are well familiar with how these data policies affect your research. If you can make the right government connections, you may have access to data, but otherwise, good luck.

If you aren’t doing research in SSA, you aren’t developing teaching curricula and graduating students steeped in the concepts we need to move SSA—and STM—ahead. And you aren’t developing the tools and techniques that those students need to take with them into industry.

And because we don’t have data, discerning things like patterns of behavior can be exceedingly difficult. I see amazing behaviors in the data we use in the SDC every day that reveal patterns that I suspect many of you are not aware of.

Are patterns of behavior important? If you think some categories of data are consistently better than others, I would suggest you haven’t examined enough data. Every system fails or has shortcomings. Understanding how to detect when certain data may have problems is vital to good SSA.

And to be blunt, analyzing patterns of behavior doesn’t mean hyper-analyzing some one-off event and spinning it into a headline-grabbing story. That over-sensationalizes the event and makes it seem out of the ordinary, when it isn’t. The general public watches intently for a moment and then quickly loses interest, thinking the threat is gone, we don’t know what we’re talking about, or both. This behavior will not elicit the focused, long-term attention needed to resolve these problems.

Finally, if industry doesn’t get the people and ideas to advance the state of the art, we aren’t going to make much progress toward supporting whatever legal entity eventually takes on STM.

Let’s assume for a moment that we have upgraded our current tracking capabilities using some nascent or future system that provides everything we need: good orbits, realistic covariance, size, mass, operational status, maneuver capability. Of course, this won’t be free and we shouldn’t expect it to be. But it can cost considerably less than our legacy systems. Does that solve our problems?

It won’t if we don’t find a way not only to pay the organizations collecting the data but also to share that data widely with everyone. It is difficult enough to arrive at the same interpretation of an event using the same data, much less make the same decision. Using different data because some operators can’t afford the good stuff will lead to inconsistent and, eventually, regrettable decisions.

I know that’s a lot to take in, but pretending that these issues don’t exist is like pretending the emperor has clothes. And do you know what kinds of problems don’t get resourced and addressed? You guessed it: ones that you won’t admit exist.

If you’re not feeling at least a little uncomfortable at this point or even completely overwhelmed or annoyed, then I must have put you to sleep and I apologize for that. But if you’re awake now, this is the important part. We can solve these problems by a simple three-step process: (1) acknowledge the problems, (2) stop rationalizing why we can’t change current approaches and provide transparent access to the data, and (3) work together to find solutions.

We need to get comfortable with—or even bold about—speaking directly and openly about the problems we have, and not allow others to stifle the conversation.

We need to critically assess existing restrictions on transparency and find ways to share data with everyone. Systems like GPS, Google Earth, and Planet have already shown us the way and coming technology will eventually take these decisions out of our hands, anyway.

And we need to work together as a global community—protecting the shared commons of Earth orbit—to find solutions. I look forward to working with each and every one of you on the road ahead and some spirited discussion here at this conference. Thank you.