Providing Data¶
The WebCOOS project supports ingesting image and video data from camera devices (device ingestion) and analyses performed on the image and video camera data (analysis product ingestion).
This guide will provide a high-level overview of where to get started with integrating a camera or analysis product with the WebCOOS data infrastructure.
Media Ingestion¶
The WebCOOS project supports the following ingestion pathways for camera devices producing images and/or videos:
RTSP and RTMP Streaming¶
The WebCOOS project supports the ingestion of real-time video stream data from devices that support RTSP (Real Time Streaming Protocol) or RTMP (Real Time Messaging Protocol). These protocols allow Axiom to capture a live video stream, which can then be reused to produce the following:
Archived, timestamped, 10-minute video files captured from the live video stream, which can be re-formatted, compressed, or otherwise transformed to suit data consumers. These video files are archived in the WebCOOS data system and made available through the WebCOOS distribution APIs and website.
Archived, timestamped image files taken every minute from the captured video stream. These image files are archived in the WebCOOS data system and made available through the WebCOOS distribution APIs and website.
Streamed, live video consumable by adaptive streaming clients (video-playing devices that can detect and choose the best video quality for their hardware and bandwidth). WebCOOS offers streaming in HLS and DASH for every camera device stream being captured. The streaming video from the feeds can be viewed on the WebCOOS website or easily integrated with external applications and websites.
See the requirements section for Streaming to start identifying the necessary information for RTSP stream ingestion.
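If you would like to confirm that your stream is reachable before getting in touch, a quick probe is often enough. The sketch below (not an official WebCOOS tool) uses Python to call ffprobe against a placeholder stream URL and print the streams it finds:

# Sketch: verify an RTSP/RTMP stream is readable with ffprobe.
# Assumes ffprobe is installed; the stream URL is a placeholder.
import json
import subprocess

STREAM_URL = "rtsp://camera.example.org/live"  # replace with your camera's stream URL

result = subprocess.run(
    ["ffprobe", "-v", "error", "-print_format", "json", "-show_streams", STREAM_URL],
    capture_output=True,
    text=True,
    timeout=30,
)

if result.returncode == 0:
    for stream in json.loads(result.stdout).get("streams", []):
        print(stream.get("codec_type"), stream.get("codec_name"),
              stream.get("width"), stream.get("height"))
else:
    print("Could not read stream:", result.stderr.strip())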
If you have an RTSP or RTMP-enabled camera that you would like to integrate with the WebCOOS data system please get in touch!
S3 Uploading¶
The WebCOOS project supports the ingestion of video clips or still images from data providers. Providers can upload video clips of any length (e.g. 10-minute intervals) and still images from a camera taken at any frequency (e.g. every 15 minutes). These video and image files are archived in the WebCOOS data system and made available through the WebCOOS distribution APIs and website. This type of data ingestion is best for historical cameras that are no longer producing data, or for cameras that need to store data locally on the device or server and upload periodically due to bandwidth limitations at the install location.
Uploads made to WebCOOS are done through an S3 API. In addition to some vendor cameras supporting uploads directly to an S3 API out-of-the-box, WebCOOS can provide example programs to assist in the upload of archived video and image data.
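As a rough illustration only, an upload might look like the sketch below; the endpoint, bucket, credentials, and key layout are placeholders that WebCOOS would supply or agree on with you during onboarding.

# Sketch: upload an archived video or still image with boto3.
# Endpoint, bucket, credentials, and key layout are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.org",        # placeholder endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

s3.upload_file(
    Filename="buxtoncoastalcam.2020-12-11_1400.mp4",
    Bucket="example-provider-bucket",
    Key="buxtoncoastalcam/raw/2020/2020_12/2020_12_11/buxtoncoastalcam.2020-12-11_1400.mp4",
)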
See the requirements for Uploading to start identifying the necessary information for S3 upload ingestion.
If you have video or image files from cameras you would like to upload into the WebCOOS data system please get in touch!
External S3 Indexing¶
If you have a large collection of image or video data already under your management and have the ability to host it through an S3 compatible API, WebCOOS may be able to externally index all of your video and image data and provide access to it through the WebCOOS data system. Please see the requirements for External Indexing and get in touch for more information!
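As a quick self-check before reaching out, you can confirm that your archive is listable through your S3-compatible endpoint. The sketch below uses placeholder endpoint, bucket, and prefix names and assumes the bucket allows anonymous reads.

# Sketch: list externally hosted media through an S3-compatible API.
# Endpoint, bucket, and prefix are placeholders; assumes anonymous read access.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.org",       # your S3-compatible endpoint
    config=Config(signature_version=UNSIGNED),   # anonymous access
)

resp = s3.list_objects_v2(Bucket="example-archive-bucket", Prefix="camera1/raw/", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"], obj["LastModified"])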
Analysis Product Ingestion¶
WebCOOS supports ingesting analysis products of three types: media (images/videos), time-series data, and object detection data.
Image / Video¶
Similar to Media Ingestion, analysis products can be ingested into WebCOOS as images and videos by uploading them via an S3 API or providing WebCOOS an external S3 endpoint to access and index.
Please see the requirements for Devices, Uploading, and Indexing, and get in touch for more information!
Time-series¶
If your analysis produces time-series-like data, WebCOOS can ingest the results. Time-series data submitted to WebCOOS is made available through a filterable API.
Note
Time-series data must be anchored to an existing WebCOOS asset and therefore has a static location (latitude, longitude) and height.
To submit time-series data to WebCOOS, it must be in the JSON Lines file format, and each line in the file must adhere to the WebCOOS time-series JSON schema as follows:
{
"uid": str, # unique ID of the time-series
"type": str, # measurement, count, action, or flag
"param": str, # CF standard name (if possible)
"time": str, # datetime (ISO8601)
"unit": str, # UCUM compatible string
"tags": [str], # namespaced tags
"value": float, # data value
}
key | description
---|---
uid | A unique string assigned to your specific time-series data. This value is for you to use to establish uniqueness of each submitted time-series. A combination of the uid, type, and param defines a unique time-series feed in WebCOOS.
type | The type of time-series measurement this is: one of measurement, count, action, or flag.
param | A string describing the parameter that was measured. For measurement types, use a CF standard name if possible.
time | An ISO8601 datetime string describing the instantaneous time of the measurement or occurrence.
unit | Only used when type is measurement; a UCUM-compatible unit string.
tags | Tags can be used to identify objects or other features you would like to query on through the WebCOOS API in the future. For example, if you have identified a specific object in your analysis and track it over multiple inputs, you can tag each time-series value as pertaining to that object (e.g. object:type:car:id:551).
value | The numeric value of the time-series data.
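For reference, records like those in the examples below can be written out as JSON Lines with just the standard library; the file name here is illustrative.

# Sketch: write time-series records as JSON Lines (one JSON object per line).
# Records mirror the "count" example below; the file name is illustrative.
import json

records = [
    {"uid": "follypiersouthcam", "time": "2022-09-29T13:00:00Z",
     "type": "count", "param": "person", "value": 4},
    {"uid": "follypiersouthcam", "time": "2022-09-29T14:00:00Z",
     "type": "count", "param": "person", "value": 2},
]

with open("follypiersouthcam-person-counts.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")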
Examples¶
measurement¶
Water Depth. The uid is set to rosemontpeonie:wl_sensor_1 to capture the idea that there may be more than one measurement of the same type and param at this location.
{
"uid": "rosemontpeonie:wl_sensor_1",
"time": "2022-09-29T13:00:00Z",
"type": "measurement",
"param": "height_between_sensor_and_water_surface",
"unit": "cm",
"value": 275.2
}
{
"uid": "rosemontpeonie:wl_sensor_1",
"time": "2022-09-29T14:00:00Z",
"type": "measurement",
"param": "height_between_sensor_and_water_surface",
"unit": "cm",
"value": 313.4
}
flag¶
Severity of a rip current and wave run-up event.
{
"uid": "currituck_sailfish",
"time": "2022-09-29T13:00:00Z",
"type": "flag",
"param": "rip_current_severity",
"value": 3
}
{
"uid": "currituck_sailfish",
"time": "2022-09-29T14:00:00Z",
"type": "flag",
"param": "rip_current_severity",
"value": 1
}
{
"uid": "currituck_sailfish",
"time": "2022-09-29T14:00:00Z",
"type": "flag",
"param": "wave_run_up",
"value": 2
}
action¶
Car Actions
{
"uid": "north_inlet",
"time": "2022-09-29T13:00:00Z",
"type": "action",
"param": "entrance",
"tags": [
"object:type:car:id:551",
"object:type:entrance:id:46"
],
"value": 1
}
{
"uid": "north_inlet",
"time": "2022-09-29T14:00:00Z",
"type": "action",
"param": "exit",
"tags": [
"object:type:car:id:551",
"object:type:entrance:id:46"
],
"value": 1
}
count¶
People Counting
{
"uid": "follypiersouthcam",
"time": "2022-09-29T13:00:00Z",
"type": "count",
"param": "person",
"value": 4
}
{
"uid": "follypiersouthcam",
"time": "2022-09-29T14:00:00Z",
"type": "count",
"param": "person",
"value": 2
}
Bird Counting. The param is set to gull because the unique time-series feed will be defined as [uid]:[type]:[param] in WebCOOS (follypiersouthcam:count:gull). If the param was bird and you counted more than one species of bird, the time-series records for follypiersouthcam:count:bird would overwrite each other for each species counted at the same time. The records below give you the ability to count many different species of birds, and they will be queryable through the object:type:bird tag in the WebCOOS API.
{
"uid": "follypiersouthcam",
"time": "2022-09-29T13:00:00Z",
"type": "count",
"param": "gull",
"tags": [
"object:type:bird",
"object:type:bird:species:gull"
],
"value": 4
}
{
"uid": "follypiersouthcam",
"time": "2022-09-29T13:00:00Z",
"type": "count",
"param": "sparrow",
"tags": [
"object:type:bird",
"object:type:bird:species:sparrow"
],
"value": 7
}
Object Detections¶
WebCOOS can accept object detection results in the form of COCO-formatted JSON documents. Each COCO-formatted analysis result must reference the public WebCOOS URL that the image or video is accessible from. This allows WebCOOS to pair the analysis results with the original media to produce visualization products, and to assign the analysis to a specific image or video internally.
Note
If you are not familiar with the COCO data format, we recommend reviewing this tutorial. The information below assumes you have a basic understanding of the format!
Rip Current Detection¶
This rip detection example annotates:
two (2) segments identified as rip currents
the bbox of both areas
the keypoints of all of the rip detections
{
"info": {
"year": "2022",
"version": "1",
"description": "RipFlow",
"contributor": "Alex Pang",
"url": "https://github.com/webcoos/ripflow",
"date_created": "2020-11-14T00:00:00+00:00"
},
"categories": [
{
"id": 0,
"name": "ripcurrent",
"supercategory": "none"
}
],
"annotations": [
{
"id": 0,
"image_id": 0,
"category_id": 0,
// This defines polygons that identify the areas rip currents have been detected
// Optional. List of points that define the shape of the object
// i.e. the bounding polygon, drawn in order and shaded in.
// Array of 2X (x, y) pixels.
"segmentation": [
[
45, 2,
48, 10,
48, 4,
45, 2
],
// Can be split into multiple polygons if need be
[
48, 4,
55, 10,
55, 3,
48, 4
]
],
// Required. BBox around the detected object.
"bbox": [
45, // top left x pixel
2, // top left y
10, // width
20 // height
],
// Area of the segmentation polygons (if exists) or bbox, in pixels
"area": 200,
// 0=single, 1=group
"iscrowd": 0,
// This defines the points within the image that rip currents have
// been detected.
// Array of 3X: (x, y, visible flag)
// where the visible flag is:
// 0: not labeled
// 1: labeled but not visible
// 2: labeled and visible
"keypoints": [
45, 2, 2,
46, 4, 2,
48, 10, 2,
46, 7, 2
],
"num_keypoints": 4
},
// Captions can be added as annotations as needed
{
"id": 1,
"image_id": 0,
"caption": "This is a rip!"
}
],
"images": [
{
"id": 0,
"license": 1,
"width": 1200,
"height": 800,
"file_name": "currituck_hampton_inn-2022-11-16-141239Z.jpg",
"date_captured": "2022-11-16T14:12:39+00:00",
"flickr_url": "https://s3.us-west-2.amazonaws.com/webcoos/media/sources/webcoos/groups/noaa/assets/currituck_hampton_inn/feeds/raw-video-data/products/one-minute-stills/elements/2022/11/16/currituck_hampton_inn-2022-11-16-141239Z.jpg"
}
],
"licenses": [
{
"id": 1,
"url": "https://creativecommons.org/publicdomain/zero/1.0/",
"name": "Public Domain"
}
]
}
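Before submitting, it can help to sanity-check your document. The sketch below (not an official validator, and the file name is a placeholder) verifies that every annotation points at a defined image and that each image carries a public URL. Note that the annotated example above includes explanatory // comments, which would need to be removed for the file to parse as plain JSON.

# Sketch: basic sanity checks on a COCO-formatted results file.
# File name is a placeholder; this is not an official WebCOOS validator.
import json

with open("rip_detections.json") as f:
    coco = json.load(f)

image_ids = {img["id"] for img in coco.get("images", [])}

for img in coco.get("images", []):
    # Each image should reference the public WebCOOS URL it is accessible from.
    assert img.get("flickr_url", "").startswith("https://"), img

for ann in coco.get("annotations", []):
    # Every annotation must point at an image defined in this document.
    assert ann["image_id"] in image_ids, ann

print(len(coco.get("annotations", [])), "annotations reference", len(image_ids), "image(s)")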
Referencing Video Frames¶
By default, COCO does not support annotating videos. WebCOOS allows a small change to the COCO data format to support referencing a specific image frame contained in a video. In each image definition, you may include a value for frame, which indicates that all annotations referencing that image block refer to that specific frame inside of the video. The images block can be defined to capture analysis from many frames in a video:
{
// ...
"images": [
{
"id": 0,
"license": 1,
"width": 1200,
"height": 800,
"file_name": "buxtoncoastalcam.2020-12-11_1400.mp4",
"date_captured": "2020-12-11T18:00:00+00:00",
"flickr_url": "https://s3.webcat.axds.co/surfline/buxtoncoastalcam/raw/2020/2020_12/2020_12_11/buxtoncoastalcam.2020-12-11_1400.mp4",
"frame": 30
},
{
"id": 1,
"license": 1,
"width": 1200,
"height": 800,
"file_name": "buxtoncoastalcam.2020-12-11_1400.mp4",
"date_captured": "2020-12-11T18:00:00+00:00",
"flickr_url": "https://s3.webcat.axds.co/surfline/buxtoncoastalcam/raw/2020/2020_12/2020_12_11/buxtoncoastalcam.2020-12-11_1400.mp4",
"frame": 573
}
],
// ...
}
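To double-check which frame a frame value points at, you can extract that frame from the source video yourself. The sketch below assumes the opencv-python package is installed and reuses the video file name from the example above.

# Sketch: pull a specific frame out of a video to confirm what a "frame"
# value in the images block refers to. Requires the opencv-python package.
import cv2

VIDEO = "buxtoncoastalcam.2020-12-11_1400.mp4"
FRAME = 573  # frame index referenced by the second image entry above

cap = cv2.VideoCapture(VIDEO)
cap.set(cv2.CAP_PROP_POS_FRAMES, FRAME)  # seek to the annotated frame
ok, image = cap.read()
cap.release()

if ok:
    cv2.imwrite("buxtoncoastalcam-frame-%d.jpg" % FRAME, image)
else:
    print("Could not read frame", FRAME, "from", VIDEO)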