
Increasingly, communication over the Internet is not between human beings, but between things. In fact, it’s estimated that by 2022, 45% of Internet traffic will flow between and among things rather than people, and that by 2020, 212 billion “Things” will be connected to the Internet.


While this is impressive, it should also be somewhat alarming. Using the Internet as the glue for all of these machine-to-machine interactions necessarily assumes that the continuous end-to-end data flow that allows TCP/IP to work on the traditional Internet also exists in the machine-to-machine environment. And that’s often not true.


Let’s think about the kinds of devices being connected, why and where they’re being connected, and what their capabilities are. Many people might assume that the largest market for IoT would be retail. After all, we’re bombarded by commercials pushing the latest home security systems, or even home management systems that integrate voice-activated power and HVAC controls with advanced security systems that not only arm alarms but also physically secure the home. Yet retail IoT systems like these are forecast to reach only 1% of the total value of the IoT market by 2025.


The biggest potential market for IoT appears to be the health care industry. Connected devices can play a significant role in healthcare by embedding sensors and actuators in patients and their medicine for monitoring and tracking. IoT is already in wide use in clinics to gather and analyze data from diagnostic sensors and lab equipment, and there is a growing movement toward embedding monitoring devices, and even actuators, in patients’ own bodies. These are necessarily small, low-power devices that are unpredictably mobile, all factors that contribute to disruption or delay.


The second-largest market for IoT is forecast to be industrial manufacturing. Not only have robots been widely deployed to reduce human labor costs, but there has also been an explosion of sensors that can detect faults immediately and generate repair requests complete with the necessary maintenance and repair information. Except in certain environments, traditional Internet infrastructure is readily available here.


Third place in the forecast IoT market is monitoring the production and distribution of electrical power. This means placing devices in many locations far removed from traditional Internet infrastructure, or even cellular coverage, while maintenance and repair vehicles are themselves mobile. This is again an environment in which end-to-end network connectivity is not guaranteed.

These are just a few of the emerging markets, use cases and environments in the IoT field. Most of the standards work in these areas has addressed the need for low-power wireless communications for different applications. Efforts to deal with disruption and delay have been proprietary, and lack the foundational assumption that the network will be partitioned in ways that disrupt end-to-end communication. As an emerging IETF standard, DTN presents an opportunity to provide consistent network automation, enabling the expansion of the IoT into constrained network environments.
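To make the contrast with TCP/IP concrete, here is a toy store-and-forward sketch in Python. A DTN-style node holds bundles in local storage while its outbound link is down and forwards them when a contact resumes, so no continuous end-to-end path is required. The node model, class, and bundle names are invented for illustration; this is not the Bundle Protocol itself.

```python
# Toy store-and-forward sketch: a DTN-style node stores bundles while its
# outbound link is down and forwards them when the contact resumes.
# Purely illustrative; not an implementation of the RFC 9171 Bundle Protocol.
from collections import deque

class DtnNode:
    def __init__(self, name):
        self.name = name
        self.link_up = False      # is there currently contact with the next hop?
        self.store = deque()      # persistent bundle storage
        self.delivered = []       # bundles forwarded to the next hop

    def send(self, bundle):
        self.store.append(bundle) # always store first; forward when possible
        self.flush()

    def flush(self):
        while self.link_up and self.store:
            self.delivered.append(self.store.popleft())

node = DtnNode("sensor-gateway")
node.send("telemetry-1")          # link down: bundle waits in storage
node.send("telemetry-2")
print(node.delivered)             # prints [] (nothing forwarded yet)
node.link_up = True               # a contact window opens
node.flush()
print(node.delivered)             # prints ['telemetry-1', 'telemetry-2']
```

A TCP connection attempted while the link was down would simply have failed; the DTN-style node instead absorbs the disruption and delivers later.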


Our next blog will dive a little deeper into the IoT, highlighting how DTN could facilitate the communication between the edge of the Internet and the often network-constrained “Things” that need to connect with it.


This blog is a product of the usual suspects: Scott Burleigh (NASA/JPL); Keith Scott (Mitre Corp./CCSDS); E. Jay Wyatt (NASA/JPL) and Mike Snell (IPNSIG)


We’ve been hearing that some within NASA are advocating the use of TCP/IP for space data communications on the Gateway lunar missions. At first glance, this might seem to make sense: after all, the Near-Rectilinear Halo Orbit planned for Gateway would guarantee that its line of sight with Earth is never interrupted. While the round-trip time (RTT) between ground stations on Earth and Gateway’s communications relay module would be problematic for latency-sensitive applications, TCP/IP would probably work. Kind of.
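As a rough sanity check on that RTT point, light-time arithmetic gives the scale of the problem. The distance range below is an assumed, approximate figure for a spacecraft near the Moon, not an official Gateway mission parameter.

```python
# Illustrative round-trip light time for an Earth-Gateway link.
# Distances are assumed, approximate figures for an orbit near the Moon
# (roughly the Earth-Moon distance, ~360,000-450,000 km).
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def rtt_seconds(distance_km: float) -> float:
    """Two-way light time for a given one-way distance."""
    return 2 * distance_km / C_KM_S

for d in (360_000, 405_000, 450_000):
    print(f"{d:>7,} km  ->  RTT ~ {rtt_seconds(d):.2f} s")
```

A floor of roughly 2.4 to 3 seconds of round-trip delay, before any processing or queuing, is tolerable for bulk transfer but painful for anything interactive.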

However, we think it would be ill-advised for reasons other than interactive voice and video performance issues: the whole point of Gateway is to meet the objectives laid out in Space Policy Directive-1:


Beginning with missions beyond low-Earth orbit, the United States will lead the return of humans to the Moon for long-term exploration and utilization, followed by human missions to Mars and other destinations.


If the reason for Gateway is to support eventual human missions to Mars and other interplanetary destinations, it makes no sense to use anything other than the communication protocols designed to support those missions: in particular the DTN suite. Benefits resulting from the use of DTN for Gateway include:


  • Gaining experience with a protocol suite suitable for further exploration beyond lunar orbit.

  • Built-in resilience to the communications disruptions that do occur, e.g., due to weather effects on Earth when Ka-band or optical links are used.

  • More efficient use of the Gateway-to-Ground links, since Bundle Protocol convergence layers can be tuned for the characteristics of the links (e.g. using alternate congestion control mechanisms and hence avoiding the issues with running TCP over high bandwidth-delay-product paths).

  • Flexibility in the use of ground stations to communicate with Gateway, especially when humans are not present. DTN’s store-and-forward capability will enable more flexible allocation of ground stations, since data will be buffered whenever all available ground stations are needed for other tasks.
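The bandwidth-delay-product point above can be made concrete with a quick calculation. The link rate and RTT used here are assumed, illustrative figures, not actual Gateway link parameters.

```python
# Bandwidth-delay product (BDP): the number of bytes that must be 'in flight'
# to keep a link fully utilized. Link rate and RTT are assumed, illustrative
# figures for a lunar-distance link, not actual Gateway parameters.
link_rate_bps = 100e6     # 100 Mb/s
rtt_s = 2.6               # ~Earth-Moon round-trip light time

bdp_bytes = link_rate_bps / 8 * rtt_s
print(f"BDP ~ {bdp_bytes / 1e6:.1f} MB")  # window a sender must sustain

# Classic TCP's 16-bit window field caps the window at 64 KiB without the
# window-scale option, so an unscaled sender uses a tiny fraction of the link.
print(f"Classic 64 KiB window utilization: {65535 / bdp_bytes:.2%}")
```

Even with window scaling enabled, loss recovery and congestion control behave poorly at this scale, which is why convergence layers tuned per link are attractive.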


Because so many operational benefits accrue from the use of DTN vs. TCP/IP, and because DTN aligns with the overall objectives specified in Space Policy Directive-1, we at IPNSIG highly recommend that NASA elect to use DTN for all space data communications on the Gateway missions.

IPNSIG board member Vint Cerf brought this nice article about DTN to our attention…


It’s slanted towards a general audience, and provides some historical background of DTN development. It emphasizes the need to maintain the open architecture of DTN, and stresses the importance of the security features being built into the protocol suite (for more information about the Bundle Security Protocol and how it works, see the security section of the DTN Primer at: DTN_Tutorial_v3.2).


The article also features excerpts of interviews with long-time IPNSIG friends Leigh Torgerson (NASA/JPL) and David Israel (NASA Goddard SFC).
