Satellite Radar
Satellite Radar is an effective tool for detecting vessels: it uses the reflection of radar waves to image the Earth and detect objects on its surface. The technical term for Satellite Radar is Synthetic Aperture Radar. You may also see or hear it referred to as "SAR" or "Sat-SAR".
How it works: Satellite Radar is a remote sensing technology that uses radar to create digital images of the Earth's surface. Radar is an active sensor: the satellite beams energy toward Earth and collects the reflected signal, known as backscatter. This data is processed to form an image as though you were looking down from space. From these images, Skylight uses machine learning to pick out likely vessels.
Metal objects are most reliably detected by radar, though wooden and fiberglass vessels are also detected depending on vessel size and environmental conditions.
Value / Challenges: Satellite Radar is able to see through clouds to detect objects, unlike some other sensors (e.g., optical imagery).
A challenge of Satellite Radar is that images do not typically provide sufficient detail, at least at commonly available resolutions, to gauge a vessel's length to within 10 or so meters. There are instances where debris or other non-vessel objects can generate detections, but this is relatively uncommon.
Additional resources for in-depth information on Satellite Radar
- Capella Space - SAR 101: An Introduction to Synthetic Aperture Radar
- Alaska Satellite Facility - Introduction to SAR
- U.S. Dept of Energy, Office of Scientific and Technical Information - Leveraging the Information in the Shadows of Synthetic Aperture Radar
Satellites and resolution
Skylight currently processes one source of regularly available Satellite Radar: Sentinel-1, from the European Space Agency's Copernicus program.
Full images from Sentinel-1 data are available on the Sentinel-hub browser as well as the Copernicus browser. You can track the paths of these satellites on Spectator Earth.
Source: Sentinel-1
Sentinel-1 has a single satellite (Sentinel-1A). A second satellite (Sentinel-1B) failed in late 2021. Another satellite (Sentinel-1C) is scheduled to go into orbit in December 2024.
Key stats
- Coverage: At least partial coverage of most continental EEZs
- Resolution: 10 meters
- Latency: 3-6 hours
- Revisit Rate: 12 days
Skylight only processes the IW mode (Interferometric Wide Swath). Skylight does not process the EW mode (Extra Wide Swath), WV mode (Wave) or the SM mode (Stripmap). More information here.
Coverage
The Sentinel-1 satellite collects data from many continental EEZs. The image below shows the frames indicating the total coverage available and processed by Skylight. These are all the frames relevant to the maritime space (i.e., non-terrestrial).
Each frame is approximately 200 km x 250 km.
Resolution
The resolution of Sentinel-1 is 10 meters. Each pixel in the associated image is about 10m x 10m.
The image chip created for each vessel detection is 1280 m x 1280 m (less than 1% of the total image frame). This reference can help you estimate the approximate size of a vessel.
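As a rough back-of-the-envelope illustration (assumed values and a hypothetical helper, not Skylight code), vessel length can be approximated from the number of pixels an object spans at the 10 m resolution described above:

```python
# Back-of-the-envelope size estimation from pixel counts.
# PIXEL_SIZE_M and CHIP_SIZE_M come from the figures in this article;
# estimate_length_m is a hypothetical helper, not part of Skylight.
PIXEL_SIZE_M = 10    # Sentinel-1 IW resolution (meters per pixel)
CHIP_SIZE_M = 1280   # side length of each detection image chip (meters)

def estimate_length_m(pixels_spanned: int) -> float:
    """Approximate vessel length: pixels spanned times meters per pixel."""
    return pixels_spanned * PIXEL_SIZE_M

print(estimate_length_m(8))            # an 8-pixel object is roughly 80 m long
print(CHIP_SIZE_M // PIXEL_SIZE_M)     # each chip is about 128 pixels across
```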
Images may also appear elongated or smeared. This can be due to how a radar's signal reflects back to the satellite (backscatter) while it passes overhead. See image.
Latency
The average latency (delay from time of imaging) for Satellite Radar from Sentinel-1 is 3-6 hours.
This range includes the time for the satellite to capture the image, send the image to earth, and then for Skylight to process the data. The time to send the image from the satellite to earth accounts for a majority of the latency.
The chart below is a 2-week sample. Note that the day-to-day average latency is mostly between 3-6 hours, but it is sometimes as short as 2 hours and sometimes more than 7 hours.
Revisit rate
The Sentinel-1A satellite is able to image the same location about once every 12 days. The satellite's paths cross in some locations (e.g., parts of the Caribbean and Mediterranean), resulting in more frequently available data, though these images come from different swaths/paths.
Model information
Sentinel-1
Skylight uses a computer vision model to look for vessels inside Sentinel-1 data. A vessel detection is not a guarantee that the object is a vessel. This can only be confirmed with eyes on the water.
The computer vision model was trained on a large number of examples of what vessels look like in satellite radar, annotated by experts who are familiar with maritime imagery. For more technical details about the computer vision model, see this page.
The Sentinel-1 vessel detection model is based on the classic object detection model Faster R-CNN, a two-stage detector made up of a Region Proposal Network (RPN) and a classification head. The RPN generates proposals for regions where there is likely an object of interest, and the classification head takes the most confident of these proposals and predicts scores (such as is_vessel) for each one.
The model detects vessels and predicts vessel attributes from Sentinel-1 SAR images. In particular, it uses the dual-polarization mode (VV + VH) of the Interferometric Wide swath (IW) acquisition mode of Sentinel-1, and produces point detections of vessels, cropped outputs surrounding those detections, and attributes associated with each detected vessel (currently, estimated length is displayed in Skylight).
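As a minimal sketch of how a two-stage detector like this can be run, the example below uses torchvision's off-the-shelf Faster R-CNN. This is not Skylight's actual model: the input arrays are random placeholders, and stacking VV, VH, and their ratio into three channels is an assumed workaround for RGB-pretrained backbones, used here only for illustration.

```python
# Minimal sketch of two-stage detection (RPN + classification head) with
# torchvision's Faster R-CNN. Not Skylight's model; inputs are placeholders.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Hypothetical dual-polarization backscatter, normalized to [0, 1].
vv = torch.rand(1024, 1024)
vh = torch.rand(1024, 1024)
ratio = (vv / (vh + 1e-6)).clamp(0, 1)  # assumed third channel for the RGB backbone
image = torch.stack([vv, vh, ratio])

with torch.no_grad():
    # Internally, the RPN proposes likely-object regions and the
    # classification head scores each proposal.
    outputs = model([image])

# Each result dict holds 'boxes', 'labels', and per-proposal 'scores'.
print(outputs[0]["scores"][:5])
```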
Training Data
The current version of the model was trained on Sentinel-1 scenes from several geographic areas, mostly near the coast, that were annotated by hand by subject matter experts. A total of 55,499 point labels were used. These areas are distributed globally and are shown in the image below.
The model predicts the positions of vessels and assigns each prediction a confidence score. Detections with a confidence score greater than 0.9 are displayed in Skylight.
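A brief sketch of that thresholding step, with made-up scores (0.9 is the display threshold stated above):

```python
import torch

# Hypothetical detector outputs: four boxes with confidence scores.
scores = torch.tensor([0.97, 0.42, 0.91, 0.88])
boxes = torch.rand(4, 4)

keep = scores > 0.9  # only these detections would appear in Skylight
print(boxes[keep], scores[keep])
```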
To distinguish moving vessels from islands, platforms, and other static non-vessel structures, the model is trained on data that includes overlapping images captured at different times. The model compares these images and learns to disregard static objects.
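Skylight's model learns this comparison internally from the overlapping imagery; the sketch below only illustrates the underlying intuition using a simple temporal median, which is an assumption for illustration, not the actual method:

```python
import numpy as np

# Hypothetical stack of co-registered scenes of the same frame over time.
scenes = np.random.rand(8, 512, 512)  # (time, rows, cols) backscatter

# A static structure stays bright in the temporal median, while a moving
# vessel is bright in only one scene and drops out of the median.
median = np.median(scenes, axis=0)
latest = scenes[-1]

transient = (latest > 0.9) & (median < 0.5)  # bright now, not historically
print("candidate moving-vessel pixels:", int(transient.sum()))
```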
Validation Data
The validation set is data held back from training specifically so it can be used to validate the model. For the Skylight Sentinel-1 model, a total of 6,156 point labels were used for validation. The global distribution of the validation set is roughly the same as that of the training set.
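For context, the label counts above imply roughly a 10% hold-out:

```python
# Hold-out fraction implied by the label counts given in this article.
train_labels, val_labels = 55_499, 6_156
print(val_labels / (train_labels + val_labels))  # ≈ 0.0999, i.e. ~10% held out
```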