Telemetry Database Service¶
KubOS utilizes a SQLite database to store telemetry data generated by the hardware and payload services until it is requested for transmission by the ground station.
SQLite only allows one process to make changes to a database at a time, so the telemetry database service acts as a single point of contact for interacting with the underlying telemetry database.
Configuration¶
The service has the following configuration parameters, which may be specified in the config.toml file:
database
- (Default: “/home/system/kubos/telemetry.db”) The path to the telemetry database file. The file will be created if it does not already exist.
direct_port
- The UDP port the service will listen on for direct (asynchronous) insert requests. See "Adding Entries to the Database Asynchronously" below.
[telemetry-service.addr]
ip
- The IP address of the service
port
- The port the service will listen on for GraphQL requests over HTTP
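For example, a config.toml entry for this service might look like the following sketch. The database path is the default listed above; the direct_port, IP address, and HTTP port values are illustrative placeholders, not documented defaults:
[telemetry-service]
# Path to the SQLite telemetry database (default value)
database = "/home/system/kubos/telemetry.db"
# UDP port for direct, asynchronous inserts (illustrative value)
direct_port = 8112

[telemetry-service.addr]
# IP address and port for GraphQL-over-HTTP requests (illustrative values)
ip = "0.0.0.0"
port = 8020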
Interface Details¶
Specific details about the available GraphQL queries can be found in the telemetry database service Rust docs.
Benchmark¶
The Kubos repo contains a database benchmark project which we have used to measure various behaviors of the telemetry database service.
Because each OBC has its own unique system resources, we recommend compiling and running the test project on your OBC to obtain the most accurate results.
When run on a Beaglebone Black, we gathered the following benchmark statistics:
/home/kubos # ./db-test -c tlmdb-config.toml -i 1000
NAME                     | Avg (us) | Total (us)
-------------------------|----------|-----------
local_db_api_insert      |    50460 |   50460353
local_db_api_insert_bulk |      213 |     213957
remote_gql_insert        |    64356 |   64356103
remote_gql_insert_bulk   |     9608 |    9608876
remote_udp_insert        |       87 |      87930
In summary:
- Sending a UDP request takes ~87 microseconds
  - This means that a client can send UDP requests at a rate of up to 11,494 requests per second, if it does not wait for a response. Note: This is far faster than the rate at which the service processes requests, meaning that packets will be dropped if this maximum speed is sustained.
- Individual telemetry database inserts take ~50 milliseconds per entry, while bulk telemetry database insertions take ~213 microseconds per entry on average.
- Individual GraphQL inserts (including GraphQL request receipt, database insert, and GraphQL response) take ~64 milliseconds per entry, while bulk inserts (many entries at once) take ~9.6 milliseconds per entry.
  - This means that the GraphQL service can process roughly 104 entries per second when they are inserted in bulk (or about 15 individual insert requests per second), while providing acknowledgement and transaction status.
Querying the Service¶
The telemetry query can be used to fetch a selection of data from the telemetry database. It returns an array of database entries. The query has the following schema:
query {
telemetry(timestampGe: Float, timestampLe: Float, subsystem: String, parameter: String, parameters: [String], limit: Integer): [{
timestamp: Float!
subsystem: String!
parameter: String!
value: String!
}]
}
Each of the query arguments acts as a filter for the database query:
- timestampGe - Return entries with timestamps occurring on or after the given value
- timestampLe - Return entries with timestamps occurring on or before the given value
- subsystem - Return entries which match the given subsystem name
- parameter - (Mutually exclusive with parameters) Return entries which match the given parameter name
- parameters - (Mutually exclusive with parameter) Return entries which match any of the given parameter names
- limit - Return only the first n entries found
Note: timestampGe and timestampLe can be combined to create a timestamp selection range. For example, entries with timestamps after 1000, but before 5000.
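The following query (using illustrative filter values) would fetch up to 100 eps entries from that range:
query {
    telemetry(timestampGe: 1000, timestampLe: 5000, subsystem: "eps", limit: 100) {
        timestamp
        subsystem
        parameter
        value
    }
}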
Saving Results for Later Processing¶
Immediate, large query results might consume more downlink bandwidth than is allowable, or downlink and uplink might be asynchronous from each other. In these cases, we can use the routedTelemetry query to write our results to an on-system file. This way, we can choose the specific time at which to downlink the results using the file transfer service. Additionally, by default, the output file will be in a compressed format, reducing the amount of data which needs to be transferred.
The query has the following schema:
query {
routedTelemetry(timestampGe: Float, timestampLe: Float, subsystem: String, parameter: String, parameters: [String], output: String!, compress: Boolean = true): String!
}
The output argument specifies the output file to write the query results to. It may be a relative or absolute path. The compress argument specifies whether the service should compress the output file after writing the results to it. The other arguments are the same as in the telemetry query.
The query will return a single field echoing the file that was written to. If the compress argument is true (which is the default), then the result will be the output file name suffixed with “.tar.gz” to indicate that the file was compressed using Gzip. The results file will contain an array of database entries in JSON format, matching the return fields of the telemetry query.
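For example, the following query (with illustrative filter values and file path) would write all eps entries from the given range to a file and compress it, returning “/home/system/eps-results.tar.gz” per the naming behavior described above:
query {
    routedTelemetry(timestampGe: 1000, timestampLe: 5000, subsystem: "eps", output: "/home/system/eps-results", compress: true)
}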
Adding Entries to the Database¶
The insert mutation can be used to add an entry to the telemetry database. It has the following schema:
It has the following schema:
mutation {
insert(timestamp: Float, subsystem: String!, parameter: String!, value: String!): {
success: Boolean!,
errors: String!
}
}
The timestamp argument is optional. If it is not specified, one will be generated based on the current system time, in fractional seconds.
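For example, the following mutation (with illustrative values) inserts a single entry and checks the result:
mutation {
    insert(subsystem: "eps", parameter: "voltage", value: "3.3") {
        success
        errors
    }
}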
Adding Multiple Entries to the Database¶
The insertBulk mutation can be used to add multiple entries to the telemetry database at the same time. It has the following schema:
type InsertEntry {
timestamp: Float,
subsystem: String!,
parameter: String!,
value: String!
}
mutation {
insertBulk(timestamp: Float, entries: [InsertEntry!]!): {
success: Boolean!,
errors: String!
}
}
Each individual telemetry entry has an optional timestamp field. If it is not specified, the optional timestamp argument to this function will be used if it is specified; otherwise, one will be generated based on the current system time in fractional seconds.
For example, to insert multiple telemetry data points all with the same current system time:
mutation {
insertBulk(entries: [
{ subsystem: "adcs", parameter: "voltage", value: "3.3" },
{ subsystem: "eps", parameter: "voltage", value: "5.0" },
{ subsystem: "obc", parameter: "cpu", value: "45.1" }
])
}
Or to insert multiple entries with a single pre-generated timestamp:
mutation {
insertBulk(
timestamp: 1559594402.0,
entries: [
{ subsystem: "adcs", parameter: "voltage", value: "3.3" },
{ subsystem: "eps", parameter: "voltage", value: "5.0" },
{ subsystem: "obc", parameter: "cpu", value: "45.1" }
])
}
Limitations¶
- The generated timestamp value will be the current system time in fractional seconds.
- The database uses the combination of timestamp, subsystem, and parameter as the primary key. This primary key must be unique for each entry.
Adding Entries to the Database Asynchronously¶
If you would like to add many entries to the database quickly, and don’t care about verifying that the request was successful, the service’s direct UDP port may be used. This UDP port is configured with the direct_port value in the system’s config.toml file.
Individual or bulk insert requests should be sent as single UDP messages in JSON format.
Individual requests have the following schema:
{
"timestamp": Float,
"subsystem": String!,
"parameter": String!,
"value": String!,
}
The timestamp argument is optional (one will be generated based on the current system time), but the other parameters are all required.
For example:
{
"subsystem": "eps",
"parameter": "voltage",
"value": "3.5"
}
If the top level of the message is a JSON array, the request is treated as a bulk insert. Timestamps are again optional for each entry. Be sure to verify that the JSON message will fit in a single UDP packet (the default UDP payload size limit is 4096 bytes). Bulk requests have the following schema:
[
{
"timestamp": Float,
"subsystem": String!,
"parameter": String!,
"value": String!,
},
...
]
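For example, a bulk request reusing the entry values shown earlier in this document might look like this (sent as a single UDP message):
[
    {
        "subsystem": "eps",
        "parameter": "voltage",
        "value": "3.5"
    },
    {
        "timestamp": 1559594402.0,
        "subsystem": "obc",
        "parameter": "cpu",
        "value": "45.1"
    }
]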
Limitations¶
- The generated timestamp value will be the current system time in fractional seconds.
- The database uses the combination of timestamp, subsystem, and parameter as the primary key. This primary key must be unique for each entry.
- This asynchronous method can deliver requests to the telemetry database service far more quickly than the service can process them. The service’s direct UDP socket buffer can store up to 256 packets at a time.
  - As a result, no more than 256 messages should be sent (from any and all sources) using this direct method within the time period required for the service to process them. This period can be estimated by multiplying 256 by the time needed to process a single message (for example, at roughly 50 milliseconds per individual insert, about 13 seconds); see the Benchmark section for more information.
- The service processes requests from the direct UDP method and the traditional GraphQL method one at a time, rather than simultaneously.
  - As a result, if the service is receiving requests from both methods at the same time, the time period required to process 256 direct UDP messages should be doubled.
Removing Entries from the Database¶
The delete mutation can be used to remove a selection of entries from the telemetry database. It has the following schema:
mutation {
delete(timestampGe: Float, timestampLe: Float, subsystem: String, parameter: String): [{
success: Boolean!,
errors: String!,
entriesDeleted: Integer
}]
}
Each of the mutation arguments acts as a filter for the database query:
- timestampGe - Delete entries with timestamps occurring on or after the given value
- timestampLe - Delete entries with timestamps occurring on or before the given value
- subsystem - Delete entries which match the given subsystem name
- parameter - Delete entries which match the given parameter name
The mutation has the following response fields:
- success - Indicates whether the delete operation was successful
- errors - Any errors encountered by the delete operation
- entriesDeleted - The number of entries deleted by the operation
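For example, the following mutation (with illustrative values) would delete all eps entries with timestamps on or before 1000 and report how many entries were removed:
mutation {
    delete(timestampLe: 1000, subsystem: "eps") {
        success
        errors
        entriesDeleted
    }
}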