[installation] Init with initial config for global

This commit is contained in:
2025-10-30 15:08:17 +01:00
commit 7640b452ed
3678 changed files with 2200095 additions and 0 deletions

share/doc/caps/CHANGELOG Normal file
# Change Log
All notable changes to CAPS will be documented in this file.
Please note that we have changed the date format from year-month-day
to year.dayofyear to be in sync with `caps -V`.
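As an illustration of the `year.dayofyear` scheme used below, the mapping from a calendar date can be sketched as follows (the helper name `caps_version` is illustrative, not part of CAPS):

```python
from datetime import date

def caps_version(d: date) -> str:
    # tm_yday is the 1-based day of the year (1..366)
    return f"{d.year}.{d.timetuple().tm_yday:03d}"

print(caps_version(date(2025, 8, 20)))  # → 2025.232
```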
## 2025.232
- Fix data retrieval at the beginning of a year with archive files that start
  after the requested start time but on the same day.
## 2025.199
- Fix station lookup in web application v2. This bug led to station symbols
  being placed in an arbitrary fixed grid and to wrong plots.
- Add preferred nodal plane to the focal mechanism page in OriginLocatorView v2.
## 2025.135
- Fix datafile header CRC computation.
## 2025.128
- Relax NSLC uppercase requirement for FDSNWS dataselect request.
## 2025.112
- Fix crash in combination with `caps --read-only`.
## 2025.101
- Add option `AS.filebase.params.concurrency` to write to the archive
  concurrently with multiple threads. This can improve performance under very
  high load with some storage technologies such as SSD / NVMe, or under
  moderate load with high-latency storage devices such as network-attached
  storage.
- Optimized write performance by reducing and combining page updates.
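A minimal sketch of enabling concurrent archive writes in the CAPS configuration (the value 4 is illustrative; tune it to your storage):

```config
AS.filebase.params.concurrency = 4
```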
## 2024.290
- Add option to purge data via the CAPS protocol API. Only users with the `purge`
  permission can delete data from the archive.
## 2024.269
- Fixed crash on inserting data under some still unclear circumstances.
## 2024.253
- Add more robust checks to detect corrupted files caused by, e.g.,
  faulty storage or hardware failures/crashes. Corrupt files could have
  caused segmentation faults of `caps`.
## 2024.215
- Fix web frontend bug if `AS.http.fdsnws` is specified. This
  bug prevented the web frontend from loading.
## 2024.183
- Add record filter options to rifftool data dump mode
## 2024.151
- Improve logging for plugin port: add IP and port to disconnect
messages and log disconnection requests from the plugin to
INFO level.
## 2024.143
- Fix issue with merging raw records after a restart
## 2024.096
- Attempt to fix dashboard websocket standing connection counter
## 2024.094
- Fix errors when purging a datafile which is still active
## 2024.078
- Ignore records without start time and/or end time when
rebuilding the index of a data file.
## 2024.066
- Ignore packets with invalid start and/or end time
- Fix rifftool with respect to checking data files with
check command: ignore invalid times.
- Add corrupted record and chunk count to chunks command
of rifftool.
## 2024.051
- Fix frontend storage time per second scale units
- Fix frontend real time channel display update
- Fix overview plot update when locking the time range
## 2024.047
- Update frontend
## 2024.024
- Update frontend
## 2024.022
- Add support for additional web applications to be integrated
into the web frontend
## 2023.355
- Update web frontend
- Close menu on channels page on mobile screens
if clicked outside the menu
## 2023.354
- Update web frontend
- Improve rendering on mobile devices
## 2023.353
- Update web frontend
- The server statistics page is now the default
- The plot layout sticks the time scale to the bottom
- Bug fixes
## 2023.348
- Add support for `info server modified after [timestamp]`
- Update web frontend
## 2023.347
- Some more internal optimizations
## 2023.346
- Fix bug in basic auth implementation that caused all clients to disconnect
when the configuration was reloaded.
## 2023.331
- Correct system write time metrics
## 2023.328
- Extend notification measuring
## 2023.327
- Fix crash with `--read-only`.
- Improve input rate performance with many connected clients.
## 2023.326
- Internal optimization: distribute notification handling across multiple
CPUs to speed up handling many connections (> 500).
- Add notification time to storage time plot
## 2023.325
- Internal optimization: compile client session decoupled from notification
loop.
## 2023.321
- Decouple data disc storage from client notifications. This will increase
performance if many real-time clients are connected. A new parameter has
been added to control the size of the notification queue:
`AS.filebase.params.q`. The default value is 1000.
## 2023.320
- Add file storage optimization which might be useful when dealing with a large
  number of channels. In particular `AS.filebase.params.writeMetaOnClose` and
  `AS.filebase.params.alignIndexPages` have been added in order to reduce the
  I/O bandwidth.
- Add write thread priority option. This requires the user who is running
CAPS to be able to set rtprio, see limits.conf.
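A hedged example combining these options in the CAPS configuration (values are illustrative; the priority setting only takes effect if the user may set rtprio, see limits.conf):

```config
AS.filebase.params.writeMetaOnClose = true
AS.filebase.params.alignIndexPages = true
# Raise the write thread priority
AS.filebase.params.priority = 10
```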
## 2023.312
- Do not block if inventory is being reloaded
## 2023.311
- Add average physical storage time metric
## 2023.299
- Fix storage time statistics in combination with client requests
- Improve statistics plot in web frontend
## 2023.298
- Add storage time per package to statistics
## 2023.241
- Fix protocol orchestration for plugins in combination with authentication
## 2023.170
- Add section on retrieval of data availability to documentation
## 2023.151
- Fix crash in combination with invalid HTTP credential format
## 2023.093
- Add note to documentation that inventory should be enabled in combination
with WWS for full support.
## 2023.062
- Add documentation of rifftool which is available through the separate
package 'caps-tools'.
## 2023.055
- Internal cleanups
## 2023.024
- Fix crash if requested heli filter band is out of range
- Improve request logging for heli requests
## 2023.011
### Changed
- Change favicon, add SVG and PNG variants
## 2023.011
### Fixed
- Client connection statistics
## 2023.010
### Fixed
- Crash in combination with websocket data connections
## 2023.004
### Fixed
- Reload operation with respect to access changes. Recent versions
crashed under some circumstances.
## 2022.354
### Added
- Show connection statistics in the frontend
## 2022.349
### Changed
- Improved read/write scheduler inside CAPS to optimize for
  huge numbers of clients
## 2022.346
### Fixed
- Fixed statistics calculation in `--read-only` mode.
## 2022.342
### Added
- Read optional index files per archive directory during startup, which
  allows skipping the directory scan and relying only on the index information.
  This can be useful if read-only mounted directories should be served and
  excluded from scans to reduce archive scan time.
## 2022.341
### Changed
- Improve start-up logging with respect to archive scanning and setup.
  All information goes to the notice level and will be logged irrespective
  of the configured log level.
- Add configuration option to define the path of the archive log file,
`AS.filebase.logFile`.
## 2022.334
### Fixed
- Fixed bug which prevented forwarding of new channels in combination
with wildcard requests.
## 2022.333
### Changed
- Improve websocket implementation
## 2022.332
### Changed
- Increase reload timeout from 10 to 60s
## 2022.327
### Fixed
- Fixed invalid websocket frames sent with CAPS client protocol
- Fixed lag in frontend when a channel overview reload is triggered
## 2022.322
### Added
- Added system error message if a data file cannot be created.
- Try to raise ulimit to at least cached files plus opened files
and terminate if that was not successful.
## 2022.320
### Fixed
- Fixed storage of overlapping raw records which overlap with
gaps in a data file.
## 2022.314
### Fixed
- Fixed trimming of raw records while storing them. If some
  samples were trimmed, raw records were sometimes merged
  although they do not share a common end and start time.
## 2022.307
### Fixed
- Fixed deadlock in combination with server info queries
## 2022.284
### Fixed
- Fixed segment resolution evaluation in frontend
## 2022.278
### Fixed
- Fixed memory leak in combination with some gap requests
## 2022.269
### Fixed
- Memory leak in combination with request logs.
### Changed
- Removed user `FDSNWS` in order to allow consistent permissions
  with other protocols. The default anonymous access is authenticated
  as guest. Furthermore, HTTP Basic Authentication can be used to
  authenticate a regular CAPS user although that is not part of the
  FDSNWS standard; this is an extension of CAPS.
  If you have set up special permissions for the FDSNWS user, you
  have to revise them.
  The rationale behind this change is (as stated above) consistency.
  Furthermore, the ability to configure access based on IP addresses
  drove this change: if CAPS authenticates any fdsnws request as
  user `FDSNWS`, then IP rules are not taken into account. Only
  anonymous requests are subject to IP-based access rules. We do not
  believe that the extra `FDSNWS` user added any additional security.
## 2022.265
### Fixed
- Crash in combination with MTIME requests.
## 2022.262
### Added
- Added modification time filter to stream requests. This allows
  requesting data and segments which were available at a certain time.
## 2022-09-06
### Improved
- Improved frontend performance with many thousands of channels and
high segmentation.
### Fixed
- Fixed time window trimming of raw records which prevented data delivery
under some very rare circumstances.
## 2022-09-02
### Added
- List RESOLUTION parameter in command list returned by HELP on client
interface.
## 2022-08-25
### Changed
- Allow floating point numbers for the slist format written by capstool.
## 2022-08-25
### Important
- Serve WebSocket requests via the regular HTTP interface. The
configuration variables `AS.WS.port` and `AS.WS.SSL.port` have
been removed. If WebSocket access is not desired then the HTTP
interface must be disabled.
- Reworked the HTTP frontend which now provides display of channel segments,
cumulative station and network views and a view with multiple traces.
- In the reworked frontend, the server statistics are only available to users
  which are members of the admin group as defined by the access control file
  configured in `AS.auth.basic.users.passwd`.
## 2022-08-16
### Added
- Open client files read-only and only request write access if the index
needs to be repaired or other maintenance operations must be performed.
This makes CAPS work on a read-only mounted file system.
## 2022-07-12
### Fixed
- Fixed HELI request with respect to sampling rate return value.
It returned the underlying stream sampling rate rather than 1/1.
## 2022-06-10
### Fixed
- Improve bad chunk detection in corrupt files. Although CAPS is
  pretty stable when it comes to corrupted files, other tools might
  not be. This improvement will trigger a file repair if a bad chunk
  has been detected.
## 2022-06-07
### Fixed
- Infinite loop if segments with resolution >= 1 were requested.
## 2022-05-30
### Added
- Add "info server" request to query internal server state.
## 2022-05-18
### Fixed
- Fix possible bug in combination with websocket requests. The
  issue manifested as a connection that no longer responds;
  closing and reopening the connection would work.
## 2022-05-09
### Added
- Add gap/segment query.
## 2022-04-26
### Important
- With this release we have split the server and the tools
- riffdump
- riffsniff
- rifftest
- capstool
  into separate packages. We did this because for some use cases
  it makes sense to install only these tools. The new package is
  called `caps-tools` and is activated for all CAPS customers.
## 2022-03-28
### Changed
- Update command-line help for capstool.
## 2022-03-03
### Added
- Log plugin IP and port on accept.
- Log plugin IP and port on package store error.
## 2021-12-20
### Added
- Explain record sorting in capstool documentation.
## 2021-11-09
### Fixed
- Fixed helicorder request in combination with filtering. The
issue caused wrong helicorder min/max samples to be returned.
## 2021-10-26
### Fixed
- Fixed data extraction for the first record if it does not
intersect with the requested time window.
## 2021-10-19
### Changed
- Update print-access help page entry
- Print help page in case of unrecognized command line options
### Fixed
- Do not print archive stats when the help page or version information is
requested
## 2021-09-20
### Fixed
- Fixed crash if an FDSNWS request with an empty compiled channel list has been
made
## 2021-09-17
### Added
- New config option `AS.filebase.purge.referenceTime` defining which reference
  time should be used during a purge run. Available are:
- EndTime: The purge run uses the end time per stream as reference time.
- Now: The purge run uses the current time as reference time.
  By default the purge operation uses the stream end time as reference time.
  To switch to **Now**, add the following entry to the CAPS configuration:
```config
AS.filebase.purge.referenceTime = Now
```
## 2021-05-03
### Changed
- Log login and logout attempts as well as blocked stream requests to request
log.
- Allow whitespaces in passwords.
## 2021-04-15
### Fixed
- Rework CAPS access rule evaluation.
### Changed
- Comprehensive rework of CAPS authentication feature documentation.
## 2021-03-11
### Important
- Reworked data file format. A high-performance index has been added to the
  data files which requires a conversion of the data files. See the CAPS
  documentation about upgrading. The conversion is done transparently in the
  background but could affect performance while the conversion is in progress.
## 2020-10-12
### Added
- Provide documentation of the yet2caps plugin.
## 2020-09-04
### Fixed
- Fixed gaps in helicorder request.
## 2020-07-01
### Fixed
- Don't modify the stream start time if the associated data file
  couldn't be deleted during a purge run. This approach makes sure that
  the stream start time and the data files are kept in sync.
## 2020-02-24
### Added
- Extended purge log. The extended purge log can be enabled with
the configuration parameter `AS.logPurge`. This feature is not enabled
by default.
### Changed
- Log maximum number of days to keep data per stream at start.
## 2020-01-27
### Fixed
- Typo in command line output.
## 2019-11-26
### Added
- Added new command line option `configtest` that runs a
  configuration file syntax check. It parses the configuration
  files and either reports Syntax OK or detailed information
  about the particular syntax error.
- Added Websocket interface which accepts HTTP connections
(e.g. from a web browser) and provides the CAPS
protocol via Websockets. An additional configuration will
be necessary:
```config
AS.WS.port = 18006
# Provides the Websocket interface via secure sockets layer.
# The certificate and key used will be read from
# AS.SSL.certificate and AS.SSL.key.
AS.WS.SSL.port = 18007
```
### Changed
- Simplified the authorization configuration. Instead of using one
login file for each CAPS interface we read the authentication
information from a shadow file. The file contains one line
per user where each line is of format "username:encrypted_pwd".
To encrypt a password mkpasswd can be used. It is recommended to
apply a strong algorithm such as sha-256 or sha-512. The command
"user=sysop pw=`mkpasswd -m sha-512` && echo $user:$pw"
would generate a line for e.g. user "sysop". The shadow
file can be configured with the config option `AS.users.shadow`.
Example:
```config
# The username is equal to the password
test:$6$jHt4SqxUerU$pFTb6Q9wDsEKN5yHisPN4g2PPlZlYnVjqKFl5aIR14lryuODLUgVdt6aJ.2NqaphlEz3ZXS/HD3NL8f2vdlmm0
user1:$6$mZM8gpmKdF9D$wqJo1HgGInLr1Tmk6kDrCCt1dY06Xr/luyQrlH0sXbXzSIVd63wglJqzX4nxHRTt/I6y9BjM5X4JJ.Tb7XY.d0
user2:$6$zE77VXo7CRLev9ly$F8kg.MC8eLz.DHR2IWREGrSwPyLaxObyfUgwpeJdQfasD8L/pBTgJhyGYtMjUR6IONL6E6lQN.2QLqZ5O5atO/
```
In addition to user authentication user access control properties are defined
in a passwd file. It can be configured with the config option
`AS.users.passwd`. Each line of the file contains a user name or a group
id and a list of properties in format "username:prop1,prop2,prop3".
  Those properties are used to grant access to certain functionalities.
  Currently the following properties are supported by CAPS: read and write.
  By default an anonymous user with read and write permissions exists. Groups use
  the prefix **%** so that they are clearly distinguished from users.
Example:
```config
user1: read,write
%test: read
```
The group file maps users to different groups. Each line of the file maps
a group id to a list of user names. It can be configured with the config
option `AS.users.group`.
Example:
```config
test: user2
```
With the reserved keyword **ALL** a rule will be applied to all users.
Example:
```config
STATIONS.DENY = all
STATIONS.AM.ALLOW = user1
```
- We no longer watch the status of the inventory and the access file with
  Inotify because it could be dangerous in case of an incompletely saved
  configuration. A reload of the configuration can be triggered by sending a
  SIGUSR1 signal to the CAPS process. Example:
```bash
kill -SIGUSR1 <pid>
```
CAPS reloads the following files, if necessary:
- shadow,
- passwd,
- access list,
- inventory.
## 2019-10-15
### Changed
- Run archive cleanup after start and every day at midnight (UTC).
## 2019-10-01
### Changed
- Increase shutdown timeout to 60 s.
## 2019-05-08
### Fixed
- Fixed potential deadlock in combination with inventory updates.
## 2019-04-23
### Fixed
- Improved plugin data scheduling, which previously could cause increased delays
  of data if one plugin transmits large amounts of data through a low-latency
  network connection, e.g. localhost.
## 2019-04-08
### Added
- Added new config option `AS.filebase.purge.initIdleTime` that
  allows postponing the initial purge process by up to n seconds. Normally
  after a start the server tries to catch up on all data, which
  might be an IO-intensive operation. In case of a huge archive the purge
  operation also slows down the read/write performance of the system. To
  reduce the load at start it is a good idea to postpone this operation.
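For example, a sketch postponing the first purge run by two minutes after startup (the value is illustrative):

```config
AS.filebase.purge.initIdleTime = 120
```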
## 2019-03-29
### Added
- Added an index file check during archive scan; corrupt index
  files are rebuilt. The lack of such a check sometimes caused CAPS to
  freeze while starting up.
## 2018-12-11
### Added
- Added support for SC3 schema 0.11.
## 2018-10-18
### Fixed
- Spin up threads correctly in case of erroneous configuration
  during live reconfiguration.
## 2018-10-17
### Fixed
- Reinitialize server ports correctly after reloading the access list. This
  was not a functional bug, only a small memory leak.
## 2018-09-14
### Fixed
- High IO usage during data storage purge. In the worst case the purge operation
  could slow down the complete system so that incoming packets could no longer
  be handled.
## 2018-09-05
### Added
- Access rule changes do not require a restart of the server anymore.
## 2018-08-29
### Changed
- Assigned human readable descriptions to threads. Process information tools
like top or htop can display this information.
## 2018-08-08
### Changed
- Reduced server load for real-time client connections.
## 2018-05-30
### Fixed
- Fixed unexpected closed SSL connections.
## 2018-05-25
### Fixed
- Fixed high load if many clients request many streams in real-time.
## 2018-05-18
### Added
- Add option to log anonymous IP addresses.
## 2018-04-17
### Fixed
- Improved handling of incoming packets to prevent packet loss to subscribed
sessions in case of heavy load.
## 2018-03-08
### Fixed
- Fixed access list evaluator. Rather than replacing general rules with concrete
rules they are now merged hierarchically.
## 2018-02-13
### Added
- Restrict plugin stream codes to [A-Z][a-z][0-9][-_] .
## 2018-01-31
### Changed
- CAPS archive log will be removed at startup and written at shutdown. With
this approach we want to force a rescan of the complete archive in case of
an unexpected server crash.
## 2018-01-30
### Fixed
- Fixed parameter name of the HTTP SSL port, which should be `AS.http.SSL.port`
  but was `AS.SSL.http.port`.
## 2018-01-29
### Fixed
- Fixed caps protocol real time handler bug which caused gaps on client-side
when retrieving real time data.
## 2018-01-26
### Changed
- Log requests per CAPS server instance.
### Fixed
- Improved data scheduler to hopefully prevent clients from stalling the
plugin input connections.
## 2018-01-02
### Fixed
- Fixed bug in combination with SSL connections that caused CAPS to not
accept any incoming connections after some time.
## 2017-11-15
### Added
- Added option `AS.inventory` which lets CAPS read an SC3 inventory XML
file to be used together with WWS requests to populate channel geo locations
which will enable e.g. the map feature in Swarm.
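A possible configuration sketch, assuming an SC3 inventory XML file exported to a path of your choice (the path below is hypothetical):

```config
AS.inventory = @CONFIGDIR@/inventory.xml
```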
## 2017-11-14
### Fixed
- Data store start time calculation in case the first record start time is
  greater than the requested one.
## 2017-11-08
### Fixed
- WWS Heli request now returns correct timestamps for data with gaps.
## 2017-10-13
### Fixed
- FDSN request did not return the first record requested.
## 2017-08-30
### Fixed
- Segmentation fault caused by invalid FDSN request.
- Timing bug in the CAPS WWS protocol implementation.
## 2017-06-15
### Added
- Add `AS.minDelay` which delays time window requests for the specified
number of seconds. This parameter is only effective with FDSNWS and WWS.
## 2017-05-30
### Feature
- Add experimental Winston Wave Server (WWS) support. This feature is disabled
by default.
## 2017-05-09
### Feature
- Add FDSNWS dataselect support for archived miniSEED records. This
  support is implicitly enabled if HTTP is activated.
## 2017-05-03
### Feature
- Support for SSL and authentication in AS, client and HTTP transport.
## 2017-03-24
### Fixed
- MSEED support.
## 2017-03-09
### Changed
- Moved the log output stating that the index was reset and that an incoming
  record has not been ignored to the debug channel.
## 2016-06-14
### Added
- Added option `AS.clientBufferSize` to configure the buffer
size for each client connection. The higher the buffer size
the better the request performance.
## 2016-06-09
### Added
- Added out-of-order requests for clients. The rsas plugin with
version >= 0.6.0 supports requesting out-of-order packets with
parameter `ooo`, e.g. `caps://localhost?ooo`.
- Improved record insertion speed with out-of-order records.
## 2016-03-09
### Fixed
- Low packet upload rate.


.. highlight:: rst
.. _caps:
####
caps
####
**Realtime and archive waveform server**
Module Configuration
====================
| :file:`etc/defaults/global.cfg`
| :file:`etc/defaults/caps.cfg`
| :file:`etc/global.cfg`
| :file:`etc/caps.cfg`
| :file:`~/.seiscomp/global.cfg`
| :file:`~/.seiscomp/caps.cfg`
caps inherits :ref:`global options<global-configuration>`.
.. note::
Modules/plugins may require a license file. The default path to license
files is :file:`@DATADIR@/licenses/` which can be overridden by global
configuration of the parameter :confval:`gempa.licensePath`. Example: ::
gempa.licensePath = @CONFIGDIR@/licenses
.. _AS:
.. note::
**AS.\***
*CAPS server control parameters*
.. confval:: AS.filebase
Default: ``@ROOTDIR@/var/lib/caps/archive``
Type: *string*
Defines the path to the archive directory.
.. confval:: AS.port
Default: ``18002``
Type: *int*
Defines the server port for client requests.
.. confval:: AS.clientBufferSize
Default: ``16384``
Unit: *B*
Type: *int*
Size of the client buffer in bytes. In case the client fails to read the buffered data
in time \(buffer overflow\) the connection falls back to archive requests.
.. confval:: AS.minDelay
Default: ``-1``
Unit: *s*
Type: *int*
Limits the retrieval of real\-time data. The value
specifies the maximum relative end time of the time range
to be requested. The maximum absolute end time is
now \- minDelay. This is only valid for FDSNWS and WWS.
.. confval:: AS.inventory
Type: *path*
The path to an optional inventory XML file with SeisComP3
schema. This inventory information is used by WWS to populate
the channel coordinates. In future possibly more endpoints
will make use of it.
.. confval:: AS.logRequests
Default: ``false``
Type: *boolean*
Whether to maintain a request log file or not. Each request
will be logged and partly traced.
.. confval:: AS.logAnonymousIP
Default: ``false``
Type: *boolean*
Log only parts of the IP to respect users privacy.
.. confval:: AS.logPurge
Default: ``false``
Type: *boolean*
Whether to maintain a purge log file or not. Each purge
operation will be logged.
.. confval:: AS.allow
Type: *list:string*
List of IPs which are allowed to access the caps\(s\) port.
By default access is unrestricted.
.. confval:: AS.deny
Type: *list:string*
List of IPs which are not allowed to access the caps\(s\) port.
By default access is unrestricted.
.. _AS.filebase:
.. note::
**AS.filebase.\***
*File buffer control parameters*
.. confval:: AS.filebase.logFile
Type: *path*
The path to the archive log file which contains the
stream start and end times. By default it is written
to \$filebase\/archive.log.
.. confval:: AS.filebase.keep
Default: ``*.*.*.*:-1``
Type: *list:string*
Number of days to keep data per stream ID before
\"AS.filebase.purge.referenceTime\". For
stream\-specific configuration create a list of pairs
consisting of stream ID : days. Separate pairs by
comma. The first occurrence in the list takes priority.
Example keeping all streams but AM.\* and GR.\* for 14 days:
GR.\*:\-1, AM.\*.\*.\*:365, \*.\*.\*.\*:14
Default \(empty parameter\) or \-1: keep all data forever.
.. confval:: AS.filebase.preallocationSize
Default: ``65535``
Unit: *B*
Type: *int*
Preallocation size of data files in bytes. Some file systems allow reserving
disk space for files in advance. Especially on spinning disks the read
performance is improved if data can be read sequentially. The speed is
traded for disk space consumed by the file since its size will be a multiple
of the specified value. Set the value to 0 to disable this feature.
.. _AS.filebase.cache:
.. note::
**AS.filebase.cache.\***
*CAPS does not keep all files of all streams open. It*
*tries to keep open the most frequently used files and closes*
*all others. The more files CAPS can keep open the faster*
*the population of the archive. The limit of open*
*files depends on the security settings of the user under*
*which CAPS is running.*
.. confval:: AS.filebase.cache.openFileLimit
Default: ``250``
Type: *int*
The maximum number of open files. Because a stream
file can have an associated index file this value
is half of the physically opened files in worst case.
.. confval:: AS.filebase.cache.unusedFileLimit
Default: ``1000``
Type: *int*
Limit of cached files in total. This value also affects
files that have actually been explicitly closed by the
application. CAPS will keep them open \(respecting
the openFileLimit parameter\) as long as possible and
preserve a file handle to speed up reopening the
file later.
.. _AS.filebase.params:
.. confval:: AS.filebase.params.writeMetaOnClose
Default: ``false``
Type: *boolean*
This is an optimization to write the datafile meta record only
on file close and not every time a new record has been added
to a file. To save IO bandwidth when handling many channels,
this could be helpful.
.. confval:: AS.filebase.params.alignIndexPages
Default: ``false``
Type: *boolean*
This forces index pages in the file to be aligned at 4k boundaries.
In order to achieve that, NULL chunks must be inserted to
allow padding. This will lead to fewer device page updates
but slightly larger data files.
.. confval:: AS.filebase.params.priority
Default: ``0``
Type: *int*
A value greater than 0 will raise the write thread
priority to the given value. This value is in
accordance with the pthread_setschedparam function.
.. confval:: AS.filebase.params.q
Default: ``1000``
Type: *int*
The real\-time notification queue size.
.. confval:: AS.filebase.params.concurrency
Default: ``1``
Type: *int*
The number of concurrent writes to the database. The
higher the value, the more concurrent write operations
are issued, distributed across the files. A single file
can only be updated sequentially. This value is most
effective if many records of different channels are
pushed, e.g. the output of scmssort.
.. _AS.filebase.purge:
.. note::
**AS.filebase.purge.\***
*Parameters controlling IO resources occupied by the purge operation.*
*The deletion of many data files at once may have a significant impact*
*on the server performance. E.g. if the server did not run for a while*
*or the keep parameter was reduced significantly, the purge operation*
*may slow down the processing of real-time data.*
.. confval:: AS.filebase.purge.referenceTime
Default: ``EndTime``
Type: *string*
Values: ``EndTime,Now``
The reference time defining the end of the time window
to keep the data. The window length is set by
\"AS.filebase.keep\".
Data outside the window will be purged. Available values:
EndTime: The reference time is the end time per stream.
This keeps older data if no more recent data arrive.
Now: The reference time is the current time. This
deletes old data even if no recent data arrive.
.. confval:: AS.filebase.purge.idleTime
Default: ``5``
Unit: *s*
Type: *double*
Idle time between two purge runs.
.. confval:: AS.filebase.purge.initIdleTime
Default: ``0``
Unit: *s*
Type: *double*
Idle time before the first purge run starts. Normally
after a start the server tries to catch up on all data which
might be an IO intensive operation. In case of a huge archive the purge
operation also slows down the read\/write performance of the system. To
reduce the load at start it is a good idea to postpone this operation.
.. confval:: AS.filebase.purge.maxProcessTime
Default: ``1``
Unit: *s*
Type: *double*
Maximum processing time for one purge run. If exceeded the
purge task will pause for AS.filebase.purge.idleTime
seconds freeing IO resources.
.. confval:: AS.filebase.purge.startTime
Default: ``00:30``
Type: *string*
Time of the day when to run the daily purge run. Time is in UTC.
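As a sketch, the purge parameters above could be combined in :file:`caps.cfg`
as follows. All values shown are assumptions to be adapted to the actual
archive size and IO capacity:

.. code-block:: plaintext

   AS.filebase.purge.referenceTime = Now
   AS.filebase.purge.idleTime = 5
   AS.filebase.purge.initIdleTime = 600
   AS.filebase.purge.maxProcessTime = 1
   AS.filebase.purge.startTime = 00:30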
.. _AS.SSL:
.. note::
**AS.SSL.\***
*Parameters for SSL-based data requests*
.. confval:: AS.SSL.port
Type: *int*
Defines the SSL server port for client requests. By default
SSL requests are disabled.
.. confval:: AS.SSL.certificate
Type: *string*
Defines the path to the SSL certificate to use.
.. confval:: AS.SSL.key
Type: *string*
Defines the path to the private SSL key to use. This key
is not shared with clients.
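For testing, a self\-signed certificate and key may be generated with OpenSSL.
The file names and the subject are placeholders; production setups should use
a certificate signed by a trusted CA:

.. code-block:: sh

   # Generate a private key (key.pem) and a self-signed certificate
   # (cert.pem) valid for one year; the subject is a placeholder.
   openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
       -subj "/CN=caps.example.org" \
       -keyout key.pem -out cert.pem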
.. _AS.auth:
.. note::
**AS.auth.\***
*Parameters controlling the authentication system for data requests*
*based on user ID, IP addresses, access roles and access control lists.*
.. confval:: AS.auth.backend
Default: ``basic``
Type: *string*
The server provides an authentication plug\-in interface. An authentication plugin
implements access control checks. It is free to choose where it gets the access
information from, e.g. from a local database\/file or a remote server. This option
sets which authentication plugin should be used. Don't forget to load the plugin
in the plugin section. The basic plugin is built\-in.
.. _AS.auth.basic:
.. note::
**AS.auth.basic.\***
*Basic authentication parameters. The configuration can*
*be reloaded without restarting the server. Use*
*"seiscomp reload caps" to reload the*
*authentication parameters without a restart.*
.. confval:: AS.auth.basic.access-list
Default: ``@SYSTEMCONFIGDIR@/caps/access.cfg``
Type: *file*
Path to the access control list controlling access based on rules.
By default access is unrestricted. Allow rules are evaluated first.
AM.DENY \= 127.0.0.1
AM.ALLOW \= 127.0.0.1
This example rule set prohibits all AM network stations for localhost because
the DENY rule is evaluated after the ALLOW rule.
IP restrictions apply to the guest user only. In addition to IPs, access can
also be restricted by user or group. In the latter case
a \"%\" must be placed in front of the group name. Here is an example:
AM.ALLOW \= %users
AM.R44F5.ALLOW \= sysop
Rules build on one another, which can lead to misunderstandings. Here is an
example:
AM.ALLOW \= sysop
This rule allows the AM network for sysop only. But
DENY \= %users
AM.ALLOW \= sysop
allows access to the AM network for all users except those who are members of
the group users.
.. _AS.auth.basic.users:
.. confval:: AS.auth.basic.users.shadow
Default: ``@SYSTEMCONFIGDIR@/caps/shadow.cfg``
Type: *file*
Location of the user authentication file. For each user one line
of the following format must exist:
username:encrypted_pwd
To encrypt the password, mkpasswd can be used. It is recommended to
apply a strong algorithm such as sha\-256 or sha\-512. The command
u\=sysop pw\=`mkpasswd \-m sha\-512` \&\& echo \$u:\$pw
generates one line for user \"sysop\".
Add the line to the authentication file.
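As a sketch, a user line can be generated and appended in one go. Here
`openssl passwd \-6` stands in for mkpasswd in case the latter is not
installed; user name and password are placeholders:

.. code-block:: sh

   # Create a SHA-512 crypt hash and append the "username:encrypted_pwd"
   # line to the shadow file.
   u=sysop
   pw=$(openssl passwd -6 "changeme")
   echo "$u:$pw" >> shadow.cfg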
.. confval:: AS.auth.basic.users.passwd
Default: ``@SYSTEMCONFIGDIR@/caps/passwd.cfg``
Type: *file*
Location of the users access control file. Each
line starts with a user ID \(uid\) or a group ID \(gid\)
and a list of access properties in the form:
uid:prop1,prop2
or
%gid:prop1,prop2
\"%\" indicates a gid instead of a uid.
The properties grant access to certain CAPS
features. Supported access property values are:
read, write, admin.
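For illustration, a hypothetical access control file granting the user sysop
full access and members of the group viewers read\-only access could look
like this:

.. code-block:: plaintext

   sysop:read,write,admin
   %viewers:read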
.. confval:: AS.auth.basic.users.group
Default: ``@SYSTEMCONFIGDIR@/caps/group.cfg``
Type: *file*
Location of the optional group file. Each line maps a group id
to a list of users in format
gid:user1,user2,user3
.. _AS.plugins:
.. confval:: AS.plugins.port
Default: ``18003``
Type: *int*
Defines the server port to use for plugin connections.
.. confval:: AS.plugins.allow
Type: *list:string*
List of IPs which are allowed to access the plugin port.
By default access is unrestricted.
.. confval:: AS.plugins.deny
Type: *list:string*
List of IPs which are not allowed to access the plugin port.
By default access is unrestricted.
.. _AS.plugins.SSL:
.. confval:: AS.plugins.SSL.port
Type: *int*
Defines the SSL server port to use for plugin SSL connections.
The SSL port is disabled by default.
.. confval:: AS.plugins.SSL.certificate
Type: *string*
Defines the path to the SSL certificate to use.
.. confval:: AS.plugins.SSL.key
Type: *string*
Defines the path to the private SSL key to use. This key
is not shared with clients.
.. _AS.http:
.. note::
**AS.http.\***
*Web interface control parameters*
.. confval:: AS.http.port
Type: *int*
Defines the server port for HTTP connections. By default the Web interface is disabled.
Typical value: 18081
.. confval:: AS.http.allow
Type: *list:string*
List of IPs which are allowed to access the http\(s\) port.
By default access is unrestricted.
.. confval:: AS.http.deny
Type: *list:string*
List of IPs which are not allowed to access the http\(s\) port.
By default access is unrestricted.
.. confval:: AS.http.resolveProxyClient
Default: ``false``
Type: *boolean*
Sets whether the X\-Forwarded\-For HTTP header is evaluated to
retrieve the real client IP address from a proxy server.
This is important if the web frontend runs behind a proxy,
e.g. Apache. Since data access is configured per IP, the
real IP is required to grant access to requested channels.
Enabling this opens a possible security hole as clients
can then easily spoof their IP if the proxy does not
correctly maintain this header or if CAPS does not run
behind a proxy.
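When CAPS runs behind a reverse proxy, the proxy has to maintain this header
itself. A minimal sketch for Nginx, assuming CAPS serves HTTP on port 18081:

.. code-block:: plaintext

   location / {
       proxy_pass http://127.0.0.1:18081;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   }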
.. confval:: AS.http.disableBasicAuthorization
Default: ``false``
Type: *boolean*
Controls whether basic authorization is enabled.
If you are running CAPS behind a proxy which already
configures basic authorization, then enable this flag.
If basic authorization is disabled, then the default
HTTP user should have access without a password.
.. confval:: AS.http.fdsnws
Type: *string*
Sets the optional relative FDSNWS path which is being
used by the CAPS frontend client. Do not append
\"fdsnws\/dataselect\/1\/query\" as this is done
automatically. Set it to \"\/\" if the CAPS
frontend is running with a relative path behind e.g.
Nginx.
.. _AS.http.SSL:
.. note::
**AS.http.SSL.\***
*Use https instead of http when setting the following parameters*
.. confval:: AS.http.SSL.port
Type: *int*
Defines the server port for HTTPS connections.
By default the SSL Web interface is disabled.
.. confval:: AS.http.SSL.certificate
Type: *string*
Defines the path to the SSL certificate to use.
.. confval:: AS.http.SSL.key
Type: *string*
Defines the path to the private SSL key to use. This
key is not shared with clients.
.. _AS.FDSNWS:
.. note::
**AS.FDSNWS.\***
*FDSNWS control parameters for dataselect. The FDSNWS service*
*is provided through the "AS.http.port".*
.. confval:: AS.FDSNWS.maxTimeWindow
Default: ``0``
Unit: *s*
Type: *int*
Maximum length of time window per request. A value
greater than zero limits the maximum request time window
including all data. 0 disables the limit.
.. confval:: AS.FDSNWS.maxRequests
Default: ``1000``
Type: *int*
Maximum number of requests per POST. A value greater than
or equal to zero limits the number of request lines per
POST request.
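A dataselect request is then issued against the HTTP port. The following
sketch only assembles the query URL; host, port, channel and time window are
assumptions, and the commented curl line shows how it could be fetched:

.. code-block:: sh

   HOST=localhost
   PORT=18081
   URL="http://$HOST:$PORT/fdsnws/dataselect/1/query?net=GE&sta=APE&cha=BHZ&starttime=2024-01-01T00:00:00&endtime=2024-01-01T01:00:00"
   echo "$URL"
   # curl -u user:password -o data.mseed "$URL"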
.. _AS.WWS:
.. note::
**AS.WWS.\***
*Winston waveform server (WWS) control parameters. When set,*
*CAPS will also serve WWS.*
.. confval:: AS.WWS.port
Type: *int*
Server port for WWS connections. Please note that
inventory information \(see AS.inventory\) is required to
fully support WWS requests, otherwise empty values for
the channel location and unit will be returned.
Default \(no value\): The WWS interface is disabled.
.. confval:: AS.WWS.maxTimeWindow
Default: ``90000``
Unit: *s*
Type: *int*
Maximum length of time window in seconds per request.
A value greater than zero limits the maximum request time window
including all data. 0 disables the limit.
.. confval:: AS.WWS.maxRequests
Default: ``100``
Type: *int*
A value greater than or equal to zero limits the number
of request lines per POST request.
.. confval:: AS.WWS.allow
Type: *list:string*
List of IPs which are allowed to access the WWS port.
By default access is unrestricted.
.. confval:: AS.WWS.deny
Type: *list:string*
List of IPs which are not allowed to access the WWS port.
By default access is unrestricted.
Command-Line Options
====================
:program:`caps [options]`
.. _Generic:
Generic
-------
.. option:: -h, --help
Show help message.
.. option:: -V, --version
Show version information.
.. option:: --config-file arg
Use alternative configuration file. When this option is
used the loading of all stages is disabled. Only the
given configuration file is parsed and used. To use
another name for the configuration create a symbolic
link of the application or copy it. Example:
scautopick \-> scautopick2.
.. option:: --plugins arg
Load given plugins.
.. option:: -D, --daemon
Run as daemon. This means the application will fork itself
and doesn't need to be started with \&.
.. _Verbosity:
Verbosity
---------
.. option:: --verbosity arg
Verbosity level [0..4]. 0:quiet, 1:error, 2:warning, 3:info,
4:debug.
.. option:: -v, --v
Increase verbosity level \(may be repeated, e.g. \-vv\).
.. option:: -q, --quiet
Quiet mode: no logging output.
.. option:: --print-component arg
For each log entry print the component right after the
log level. By default the component output is enabled
for file output but disabled for console output.
.. option:: --component arg
Limit the logging to a certain component. This option can
be given more than once.
.. option:: -s, --syslog
Use syslog logging backend. The output usually goes to
\/var\/log\/messages.
.. option:: -l, --lockfile arg
Path to lock file.
.. option:: --console arg
Send log output to stdout.
.. option:: --debug
Execute in debug mode.
Equivalent to \-\-verbosity\=4 \-\-console\=1 .
.. option:: --trace
Execute in trace mode.
Equivalent to \-\-verbosity\=4 \-\-console\=1 \-\-print\-component\=1
\-\-print\-context\=1 .
.. option:: --log-file arg
Use alternative log file.
.. _Server:
Server
------
.. option:: -p, --server-port int
Overrides configuration parameter :confval:`AS.port`.
.. option:: --server-ssl-port int
Overrides configuration parameter :confval:`AS.SSL.port`.
.. option:: -P, --plugin-port int
Overrides configuration parameter :confval:`AS.plugins.port`.
.. option:: --http-port int
Overrides configuration parameter :confval:`AS.http.port`.
.. option:: --read-only
Do not store any packets.
.. _Test:
Test
----
.. option:: --configtest
Run a configuration file syntax test. It parses the
configuration files and either reports Syntax Ok or detailed
information about the particular syntax error.
.. option:: --print-access
Print access information for one or more channels from a
given IP and a user with password, format: NET.STA.LOC.CHA,
e.g.,
IP check
caps \-\-print\-access GE.\*.\*.\* 127.0.0.1
IP and user:password check
caps \-\-print\-access GE.APE.\*.\* \-\-user gempa:gempa 127.0.0.1
The stream ID filter supports wildcards. Use option \-v to
enable the trace mode to get detailed information about the
rule evaluation.
.. option:: -u, --user
Server user and password. Format: user:password .
.. highlight:: rst
.. _caps2caps:
#########
caps2caps
#########
**caps2caps synchronizes CAPS servers in real-time**
Description
===========
*caps2caps* can connect two |appname| server instances to synchronize their
data in real time. When server 1 fails while server 2 continues to operate,
server 1 can backfill the data as soon as it is back online.
*caps2caps* can run on either side to pull the data from the other server or to
push the data to this server:
* For **pulling data** from a remote to a local server configure the input and the
output parameters with the remote and the local server, respectively.
* For **pushing data** from a local to a remote server configure the input and the
output parameters with the local and the remote server, respectively.
.. _fig-caps2caps:
.. figure:: media/caps2caps.png
:width: 18cm
:align: center
caps2caps instances connecting two |appname| servers pulling data from the
remote into the local server.
Examples
========
* Run caps2caps as daemon module.
#. Configure input and output hosts (:confval:`input.address`,
:confval:`output.address`) in caps2caps module configuration,
:file:`caps2caps.cfg`.
#. Enable and start caps2caps
.. code-block:: bash
seiscomp enable caps2caps
seiscomp start caps2caps
* Run caps2caps on demand in a terminal, explicitly specifying
input and output hosts, without encryption
.. code-block:: bash
caps2caps -I caps://inputServer:18002 -O caps://outputServer:18003
The same as above but with encrypted data transfer controlled by user name and
password
.. code-block:: bash
caps2caps -I capss://user:password@inputServer:18002 -O capss://user:password@outputServer:18003
* Pull or push data depending on module configuration but ignore the journal
file. This allows resending the data
.. code-block:: bash
caps2caps -j ""
Module Configuration
====================
| :file:`etc/defaults/global.cfg`
| :file:`etc/defaults/caps2caps.cfg`
| :file:`etc/global.cfg`
| :file:`etc/caps2caps.cfg`
| :file:`~/.seiscomp/global.cfg`
| :file:`~/.seiscomp/caps2caps.cfg`
caps2caps inherits :ref:`global options<global-configuration>`.
.. note::
Modules/plugins may require a license file. The default path to license
files is :file:`@DATADIR@/licenses/` which can be overridden by global
configuration of the parameter :confval:`gempa.licensePath`. Example: ::
gempa.licensePath = @CONFIGDIR@/licenses
.. confval:: streams
Type: *string*
Comma separated list of streams. Stream format: NET.STA.LOC.CHA.
Streams may contain wildcards.
.. confval:: begin
Type: *string*
Start time of data time window, given in GMT. Date time format:
[YYYY\-MM\-DD HH:MM:SS].
.. confval:: end
Type: *string*
End time of data time window. Date time format:
[YYYY\-MM\-DD HH:MM:SS].
.. confval:: maxDays
Default: ``-1``
Unit: *day*
Type: *int*
Maximum number of days to acquire regardless of whether the time window
is configured or read from journal. A value of 0 or less disables
the check.
.. confval:: days
Default: ``-1``
Unit: *day*
Type: *int*
Sets the start time of the data time window to n days before the current time.
.. confval:: daysBefore
Default: ``-1``
Unit: *day*
Type: *int*
Sets the end time of the data time window to n days before the current time.
.. confval:: timeWindowUpdateInterval
Default: ``-1``
Unit: *s*
Type: *int*
Sets the interval in seconds at which the relative request
time window defined by option days and\/or daysBefore is
updated.
Use a value less than or equal to zero to disable the update.
This feature is supported in archive mode only.
A typical use case is when data has to be transmitted
continuously with a time delay.
.. confval:: maxRealTimeGap
Default: ``-1``
Unit: *s*
Type: *int*
Sets the maximum real\-time data gap in seconds. This means,
if the start time of the requested time window of a channel
lies more than this value before the current system time,
then the request is split into a real\-time request starting
at system time \- marginRealTimeGap and a backfill request
from the requested start time to system time \- marginRealTimeGap.
That prioritizes real\-time data and backfills old data in
parallel.
.. confval:: marginRealTimeGap
Default: ``60``
Unit: *s*
Type: *int*
The time margin used to request real\-time data in combination
with maxRealTimeGap with respect to system time.
.. confval:: realtime
Default: ``true``
Type: *boolean*
Enable real\-time mode. Archived data is not fetched.
.. confval:: outOfOrder
Default: ``false``
Type: *boolean*
Enable out\-of\-order mode. Allows transferring data
which is not in timely order.
.. _input:
.. note::
**input.\***
*Configuration of data input host.*
.. confval:: input.address
Type: *string*
URL. Format: [[caps\|capss]:\/\/][user:pass\@]host[:port] .
.. _output:
.. note::
**output.\***
*Configuration of data output host.*
.. confval:: output.address
Default: ``localhost:18003``
Type: *string*
URL. Format: [[caps\|capss]:\/\/][user:pass\@]host[:port] .
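A hypothetical :file:`caps2caps.cfg` pulling the last 14 days of selected
streams from a remote server into a local one; all names and values are
assumptions:

.. code-block:: plaintext

   streams = GE.*.*.*, AM.R44F5.*.*
   days = 14
   realtime = true
   input.address = caps://remoteServer:18002
   output.address = caps://localhost:18003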
.. confval:: output.bufferSize
Default: ``1048576``
Unit: *byte*
Type: *uint*
Size of the packet buffer.
.. confval:: output.backfillingBufferSize
Default: ``0``
Unit: *s*
Type: *uint*
Length of the backfilling buffer, a tool to mitigate
out\-of\-order data. Whenever a gap is detected, records
will be held in a buffer and not sent out. Records are
flushed from front to back if the buffer size is
exceeded. A value of 0 disables this feature.
.. confval:: output.mseed
Default: ``false``
Type: *boolean*
Enable Steim2 encoding for received RAW packets.
.. confval:: output.timeout
Default: ``60``
Unit: *s*
Type: *int*
Timeout when sending a packet. If the timeout expires,
the connection will be closed and re\-established.
.. _journal:
.. confval:: journal.file
Default: ``@ROOTDIR@/var/run/caps2caps/journal``
Type: *string*
File to store stream states.
.. confval:: journal.flush
Default: ``10``
Unit: *s*
Type: *uint*
Flush stream states to disk in the given interval.
.. confval:: journal.waitForAck
Default: ``60``
Unit: *s*
Type: *uint*
Wait when a sync has been forced, up to the given seconds.
.. confval:: journal.waitForLastAck
Default: ``5``
Unit: *s*
Type: *uint*
Wait on shutdown to receive acknowledgement messages, up to the
given seconds.
Command-Line Options
====================
.. _Generic:
Generic
-------
.. option:: -h, --help
Show help message.
.. option:: -V, --version
Show version information.
.. option:: --config-file arg
Use alternative configuration file. When this option is
used the loading of all stages is disabled. Only the
given configuration file is parsed and used. To use
another name for the configuration create a symbolic
link of the application or copy it. Example:
scautopick \-> scautopick2.
.. option:: --plugins arg
Load given plugins.
.. option:: -D, --daemon
Run as daemon. This means the application will fork itself
and doesn't need to be started with \&.
.. _Verbosity:
Verbosity
---------
.. option:: --verbosity arg
Verbosity level [0..4]. 0:quiet, 1:error, 2:warning, 3:info,
4:debug.
.. option:: -v, --v
Increase verbosity level \(may be repeated, e.g. \-vv\).
.. option:: -q, --quiet
Quiet mode: no logging output.
.. option:: --print-component arg
For each log entry print the component right after the
log level. By default the component output is enabled
for file output but disabled for console output.
.. option:: --component arg
Limit the logging to a certain component. This option can
be given more than once.
.. option:: -s, --syslog
Use syslog logging backend. The output usually goes to
\/var\/log\/messages.
.. option:: -l, --lockfile arg
Path to lock file.
.. option:: --console arg
Send log output to stdout.
.. option:: --debug
Execute in debug mode.
Equivalent to \-\-verbosity\=4 \-\-console\=1 .
.. option:: --trace
Execute in trace mode.
Equivalent to \-\-verbosity\=4 \-\-console\=1 \-\-print\-component\=1
\-\-print\-context\=1 .
.. option:: --log-file arg
Use alternative log file.
.. _Input:
Input
-----
.. option:: -I, --input arg
Overrides configuration parameter :confval:`input.address`.
URL of data input host. Format:
[[caps\|capss]:\/\/][user:password\@]host[:port] .
.. option:: --max-real-time-gap
Maximum length of data gap after reconnecting. If exceeded,
a real\-time stream and backfilling stream will be created in
parallel. Setting this value will give highest priority to
real\-time streams, e.g., for rapid response systems.
.. _Streams:
Streams
-------
.. option:: -i, --inventory arg
Inventory XML defining the streams to add.
.. option:: -A, --add-stream arg
List of streamIDs [NET.STA.LOC.CHA] to add. Wildcards are
supported. Use comma\-separation without blanks for multiple
IDs.
.. option:: --begin arg
Start time of data request. Applied only on streams not
found in the journal. Format: 'YYYY\-MM\-DD hh:mm:ss.sss'.
.. option:: --end arg
End time of data request. Format: 'YYYY\-MM\-DD hh:mm:ss.sss'.
.. option:: --max-days arg
Unit: *day*
Maximum number of days to acquire regardless of whether the time
window is configured or read from journal. A value of 0 or
less disables the check.
.. option:: --days arg
Unit: *day*
Begin of data request time window given as days before current time.
Applied only on streams not found in the journal.
.. option:: --days-before arg
Unit: *day*
End of data request time window given as number of days
before current time.
.. _Mode:
Mode
----
.. option:: --archive
Disable real\-time mode. Only archived data is fetched and
missing records are ignored.
.. option:: --out-of-order
Use to enable out\-of\-order mode. Allows transferring data
which is not in timely order.
.. _Output:
Output
------
.. option:: -O, --output arg
Overrides configuration parameter :confval:`output.address`.
URL of data output host. Format:
[[caps\|capss]:\/\/][user:password\@]host[:port] .
.. option:: -b, --buffer-size arg
Size \(bytes\) of the journal buffer. If exceeded, a sync of
the journal is forced.
.. option:: --mseed
Enables Steim2 encoding for received RAW packets.
.. _Journal:
Journal
-------
.. option:: -j, --journal arg
Journal file to store stream states. Use an empty string to
ignore the journal file which will transfer the data
independent of previous transfers.
.. option:: --flush arg
Unit: *s*
Flush stream states to disk every given seconds.
.. option:: --waitForAck arg
Unit: *s*
Wait when a sync has been forced, up to the given seconds.
.. option:: -w, --waitForLastAck arg
Unit: *s*
Wait on shutdown to receive acknowledgment messages, up
to the given seconds.
.. highlight:: rst
.. _caps_plugin:
###########
caps_plugin
###########
**Transfers data from CAPS to SeedLink server**
Description
===========
CAPS server plugin that receives raw data via the CAPS protocol and
sends raw or compressed data to SeedLink or to standard out.
Configuration
=============
The caps_plugin can be configured like any other SeedLink plugin, e.g.
via `scconfig <https://docs.gempa.de/seiscomp3/current/apps/scconfig.html>`_.
The configuration is shown using the SC250 station of the SW network as an example.
To start `scconfig` run:
.. code-block:: sh
> seiscomp exec scconfig
Select 'Bindings' from the panel switch. The bindings panel shown in
figure :ref:`fig-scconfig-bindings-panel` configures a station for a module. It is separated into three main areas:
* the station tree (red + orange)
* the binding content (green)
* the module tree (blue)
.. _fig-scconfig-bindings-panel:
.. figure:: media/scconfig_bindings_panel.png
:width: 17cm
Bindings panel
Open the context menu of the view below the station tree and select
'Add network' to add a new network. Fill in the network name 'SW' into
the input form and press 'OK'. A double click on the network 'SW' shows
the associated stations. Add a new Station 'SC250' in the same way as
done before for the network.
Figure :ref:`fig-scconfig-add-station` shows the current station tree.
.. _fig-scconfig-add-station:
.. figure:: media/scconfig_add_station.png
:width: 17cm
Station tree of the SW network
To complete the configuration open the station 'SC250' in the station
tree and use the context menu to add a new binding for Seedlink.
Go to the sources section of the binding content, select 'CAPS' and press
the button on the left side of the selection box. Leave the upcoming input
form blank and press 'OK'. Subsequently click on the triangle besides
the CAPS label and set up the caps_plugin. Supported encodings are
'STEIM1' and 'STEIM2'. Use an empty encoding string to create raw
miniSEED packets.
Figure :ref:`fig-scconfig-binding-conf` shows an example configuration.
.. _fig-scconfig-binding-conf:
.. figure:: media/scconfig_binding_conf.png
:width: 17cm
CAPS Binding configuration
Press 'CTRL+S' to save all changes. Afterwards switch to the 'System panel',
select Seedlink in the module list and press 'Update configuration'.
Examples
========
The caps plugin can also be used as a command-line tool to request data.
The data will be sent to standard out.
Command-line help
-----------------
.. code-block:: sh
> seiscomp exec bash
> $SEISCOMP_ROOT/share/plugins/seedlink/caps_plugin -h
Data file request
-----------------
Submit the request to the CAPS server to download miniSEED data to a file,
e.g. data.mseed:
.. code-block:: sh
seiscomp exec bash
> $SEISCOMP_ROOT/share/plugins/seedlink/caps_plugin -I localhost:18002 -A SW.SC250..HH? \
--encoding STEIM2 caps2sl.localhost.18002.state \
--begin "2013-08-01 00:00:00" --end "2013-08-01 01:00:00" \
--dump > data.mseed
Submit the request based on the request file to the CAPS server to
download miniSEED data to a file, e.g. data.mseed:
.. code-block:: sh
seiscomp exec bash
> $SEISCOMP_ROOT/share/plugins/seedlink/caps_plugin -I localhost:18002 -f streams_list \
--encoding STEIM2 caps2sl.localhost.18002.state \
--begin "2013-08-01 00:00:00" --end "2013-08-01 01:00:00" \
--dump > data.mseed
Request file, e.g. streams_list:
.. code-block:: sh
SW.SC254..*
SW.SC250..HH?
SW.*..HHZ
Module Configuration
====================
| :file:`etc/defaults/global.cfg`
| :file:`etc/defaults/caps_plugin.cfg`
| :file:`etc/global.cfg`
| :file:`etc/caps_plugin.cfg`
| :file:`~/.seiscomp/global.cfg`
| :file:`~/.seiscomp/caps_plugin.cfg`
caps_plugin inherits :ref:`global options<global-configuration>`.
.. note::
Modules/plugins may require a license file. The default path to license
files is :file:`@DATADIR@/licenses/` which can be overridden by global
configuration of the parameter :confval:`gempa.licensePath`. Example: ::
gempa.licensePath = @CONFIGDIR@/licenses
.. confval:: journal
Type: *string*
File to store stream states. Use an empty string to log to standard out.
.. confval:: archive
Default: ``false``
Type: *boolean*
Disables realtime mode. Only archived data is fetched.
.. _input:
.. confval:: input.address
Default: ``localhost:18002``
Type: *string*
CAPS URL to fetch data from, format: [[caps\|capss]:\/\/][user:pass\@]host[:port]
Command-Line Options
====================
.. _Generic:
Generic
-------
.. option:: -h, --help
Show help message.
.. option:: -V, --version
Show version information.
.. option:: --config-file arg
Use alternative configuration file. When this option is
used the loading of all stages is disabled. Only the
given configuration file is parsed and used. To use
another name for the configuration create a symbolic
link of the application or copy it. Example:
scautopick \-> scautopick2.
.. option:: --plugins arg
Load given plugins.
.. option:: -D, --daemon
Run as daemon. This means the application will fork itself
and doesn't need to be started with \&.
.. _Verbosity:
Verbosity
---------
.. option:: --verbosity arg
Verbosity level [0..4]. 0:quiet, 1:error, 2:warning, 3:info,
4:debug.
.. option:: -v, --v
Increase verbosity level \(may be repeated, e.g. \-vv\).
.. option:: -q, --quiet
Quiet mode: no logging output.
.. option:: --print-component arg
For each log entry print the component right after the
log level. By default the component output is enabled
for file output but disabled for console output.
.. option:: --component arg
Limit the logging to a certain component. This option can
be given more than once.
.. option:: -s, --syslog
Use syslog logging backend. The output usually goes to
\/var\/log\/messages.
.. option:: -l, --lockfile arg
Path to lock file.
.. option:: --console arg
Send log output to stdout.
.. option:: --debug
Execute in debug mode.
Equivalent to \-\-verbosity\=4 \-\-console\=1 .
.. option:: --trace
Execute in trace mode.
Equivalent to \-\-verbosity\=4 \-\-console\=1 \-\-print\-component\=1
\-\-print\-context\=1 .
.. option:: --log-file arg
Use alternative log file.
.. _Plugin:
Plugin
------
.. option:: -I, --input arg
Overrides configuration parameter :confval:`input.address`.
.. _Streams:
Streams
-------
.. option:: -A, --add-stream arg
List of stream IDs [net.sta.loc.cha] to add. Wildcards are supported.
.. option:: -f, --stream-file arg
Path to stream\-file. The stream file may contain a list of stream IDs [net.sta.loc.cha].
.. option:: --begin arg
Request start time
.. option:: --end arg
Request end time
.. _Mode:
Mode
----
.. option:: --archive
Overrides configuration parameter :confval:`archive`.
.. _Output:
Output
------
.. option:: --dump arg
Dump all received data to stdout and don't push the data to SeedLink
.. option:: --encoding arg
Preferred data output encoding
.. option:: -I, --input arg
Data input host
.. option:: -j, --journal arg
Overrides configuration parameter :confval:`journal`.
.. highlight:: rst
.. _capssds:
#######
capssds
#######
**Virtual overlay file system presenting a CAPS archive directory as a
read-only SDS archive.**
Description
===========
:program:`capssds` is a virtual overlay file system presenting a CAPS archive
directory as a read-only :term:`SDS` archive with no extra disk space
requirement.
CAPS directory and file names are mapped. An application reading from a file
will only see :term:`miniSEED` records ordered by record start time. You may
connect to the virtual SDS archive using the RecordStream SDS or directly read
a single :term:`miniSEED` file. Other seismological software such as ObsPy or
Seisan may read directly from the SDS archive or the files therein.
.. _sec-capssds-usage:
Usage
=====
The virtual file system may be mounted by an unprivileged system user like
`sysop` or configured by the `root` user to be automatically mounted on machine
startup via an `/etc/fstab` entry or a systemd mount unit.
The following sections assume that the CAPS archive is located under
`/home/sysop/seiscomp/var/lib/caps/archive` and the SDS archive should appear
under `/tmp/sds` with all files and directories being owned by the
`sysop` user.
Regardless of which of the following mount strategies is chosen, make sure to
create the target directory first:
.. code-block:: sh
mkdir -p /tmp/sds
.. _sec-capssds-usage-unpriv:
Unprivileged user
------------------
Mount the archive:
.. code-block:: sh
capssds ~/seiscomp/var/lib/caps/archive /tmp/sds
Unmount the archive:
.. code-block:: sh
fusermount -u /tmp/sds
.. _sec-capssds-usage-fstab:
System administrator - /etc/fstab
---------------------------------
Create the /etc/fstab entry:
.. code-block:: plaintext
/home/sysop/seiscomp/var/lib/caps/archive /tmp/sds fuse.capssds defaults 0 0
Alternatively you may define mount options, e.g., to deactivate the auto mount,
grant the user the option to mount the directory themselves or use the sloppy_size
feature:
.. code-block:: plaintext
/home/sysop/seiscomp/var/lib/caps/archive /tmp/sds fuse.capssds noauto,sloppy_size,user 0 0
Mount the archive:
.. code-block:: sh
mount /tmp/sds
Unmount the archive:
.. code-block:: sh
umount /tmp/sds
.. _sec-capssds-usage-systemd:
System administrator - systemd
------------------------------
Create the following file under `/etc/systemd/system/tmp-sds.mount`.
Please note that the file name must match the path specified under `Where` with
all slashes replaced by dashes:
.. code-block:: ini
[Unit]
Description=Mount CAPS archive as readonly miniSEED SDS
After=network.target
[Mount]
What=/home/sysop/seiscomp/var/lib/caps/archive
Where=/tmp/sds
Type=fuse.capssds
Options=defaults,allow_other
[Install]
WantedBy=multi-user.target
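The naming rule above can be sketched in shell; this is only an illustration of
how the unit name is derived, not part of capssds:

```shell
# Derive the systemd mount unit name from the mount point:
# strip the leading slash and replace the remaining slashes with dashes.
where=/tmp/sds
unit="$(printf '%s' "${where#/}" | tr / -).mount"
echo "$unit"   # tmp-sds.mount
```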
Mount the archive:
.. code-block:: sh
systemctl start tmp-sds.mount
Unmount the archive:
.. code-block:: sh
systemctl stop tmp-sds.mount
Automatic startup:
.. code-block:: sh
systemctl enable tmp-sds.mount
.. _sec-capssds-impl:
Implementation Details
======================
:program:`capssds` makes use of FUSE :cite:p:`fuse`, a userspace
filesystem framework provided by the Linux kernel, as well as the libfuse
:cite:p:`libfuse` user space library.
The file system provides only read access to the data files and implements only
:ref:`basic operations <sec-capssds-impl-ops>` required to list and read data files.
It has to fulfill two main tasks: the :ref:`sec-capssds-impl-pathmap`
of CAPS and SDS directory tree entries and the :ref:`sec-capssds-impl-conv`.
:ref:`Caches <sec-capssds-impl-perf>` are used to improve the performance.
.. _sec-capssds-impl-ops:
Supported operations
--------------------
* `init` - initializes the file system
* `getattr` - get file and directory attributes such as size and access rights
* `access` - check for specific access rights
* `open` - open a file
* `read` - read data at a specific file position
* `readdir` - list directory entries
* `release` - release a file handle
* `destroy` - shutdown the file system
Please refer to
`fuse.h <https://github.com/libfuse/libfuse/blob/master/include/fuse.h>`_
for a complete list of fuse operations.
.. _sec-capssds-impl-pathmap:
Path mapping
------------
CAPS uses a :ref:`comparable directory structure <sec-archive>` to SDS with
three differences:
* The channel does not use the `.D` prefix.
* The day-of-year index is zero-based (0-365), whereas SDS uses an index
starting with 1 (1-366).
* CAPS data files use the extension `.data`.
The following example shows the translation from a CAPS data file path to an SDS
file path for the stream AM.R0F05.00.SHZ for data on January 1st 2025:
`2025/AM/R0F05/SHZ/AM.R0F05.00.SHZ.2025.000.data -> 2025/AM/R0F05/SHZ.D/AM.R0F05.00.SHZ.D.2025.001`
Directories and file names not fulfilling the :term:`miniSEED` format
specification are not listed.
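The mapping can be sketched as a small shell function; this is an illustration
of the rule above, not part of capssds:

```shell
# Map a CAPS data file path to the corresponding SDS path:
# append ".D" to the channel directory and the stream name, drop the
# ".data" extension and shift the zero-based day-of-year index to one-based.
caps_to_sds() {
  path=${1%.data}           # drop the .data extension
  idx=${path##*.}           # zero-based day-of-year index, e.g. 000
  stem=${path%.*}
  dir=${stem%/*}            # e.g. 2025/AM/R0F05/SHZ
  file=${stem##*/}          # e.g. AM.R0F05.00.SHZ.2025
  stream=${file%.*}         # e.g. AM.R0F05.00.SHZ
  year=${file##*.}          # e.g. 2025
  idx=$(printf '%s' "$idx" | sed 's/^0*//')   # strip leading zeros
  printf '%s.D/%s.D.%s.%03d\n' "$dir" "$stream" "$year" $(( ${idx:-0} + 1 ))
}

caps_to_sds 2025/AM/R0F05/SHZ/AM.R0F05.00.SHZ.2025.000.data
# -> 2025/AM/R0F05/SHZ.D/AM.R0F05.00.SHZ.D.2025.001
```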
.. _sec-capssds-impl-conv:
Data file conversion
--------------------
A :ref:`CAPS data file <sec-caps-archive-file-format>` contains records of
certain types in the order of their arrival together with a record index for
record lookup and sorting. If a process reads data, only :term:`miniSEED` records
contained in the CAPS data file are returned, ordered by record start time
rather than by order of arrival. Likewise, only :term:`miniSEED` records are counted
for the reported file size unless the `-o sloppy_size` option is specified.
.. _sec-capssds-impl-perf:
Performance optimization
------------------------
When a file is opened, all :term:`miniSEED` records are copied to a memory
buffer. This allows fast index-based data access at the cost of main memory
consumption. The number of simultaneously opened data files can be configured
through the `-o cached_files` option and must match the available memory size.
If an application tries to open more files than this limit allows, the open will fail.
To obtain the mapped SDS file size, the CAPS data file must be scanned for
:term:`miniSEED` records. Although only the header data is read, this is still an
expensive operation for hundreds of files. A file size cache is used containing
up to `-o cached_file_sizes` entries, each consuming 56 bytes of memory.
Recently accessed file sizes are pushed to the front of the cache. A cache item is
invalidated if the modification time of the CAPS data file is more recent than
the entry creation time.
If your use case does not require listing the exact file size, you may
use the `-o sloppy_size` option, which skips computing the :term:`miniSEED`
file size and returns the size of the CAPS file instead.
Command-Line Options
====================
:program:`capssds [options] [capsdir] mountpoint`
.. _File-system specific options:
File-system specific options
----------------------------
.. option:: -o caps_dir=DIR
Default: ``Current working directory``
Path to the CAPS archive directory.
.. option:: -o sloppy_size
Return the size of the CAPS data file instead of summing
up the size of all MSEED records. Although there is a
cache for the MSEED file size calculating the real size is
an expensive operation. If your use case does not depend
on the exact size you may activate this flag for speedup.
.. option:: -o cached_file_sizes=int
Default: ``100000``
Type: *int*
Number of file sizes to cache. Used when sloppy_size is
off to avoid unnecessary recomputation of MSEED sizes. A
cache entry is valid as long as neither the mtime nor
size of the CAPS data file changed. Each entry consumes
56 bytes of memory.
.. option:: -o cached_files=int
Default: ``100``
Type: *int*
Number of CAPS data files to cache. The file
handle for each cached file will be kept open to speed
up data access.
.. _FUSE Options:
FUSE Options
------------
.. option:: -h, --help
Print this help text.
.. option:: -V, --version
Print version.
.. option:: -d
Enable debug output \(implies \-f\).
.. option:: -o debug
Enable debug output \(implies \-f\).
.. option:: -f
Enable foreground operation.
.. option:: -s
Disable multi\-threaded operation.
.. option:: -o clone_fd
Use separate fuse device fd for each thread \(may improve performance\).
.. option:: -o max_idle_threads=int
Default: ``-1``
Type: *int*
The maximum number of idle worker threads allowed.
.. option:: -o max_threads=int
Default: ``10``
Type: *int*
The maximum number of worker threads allowed.
.. option:: -o kernel_cache
Cache files in kernel.
.. option:: -o [no]auto_cache
Enable caching based on modification times.
.. option:: -o no_rofd_flush
Disable flushing of read\-only fd on close.
.. option:: -o umask=M
Type: *octal*
Set file permissions.
.. option:: -o uid=N
Set file owner.
.. option:: -o gid=N
Set file group.
.. option:: -o entry_timeout=T
Default: ``1``
Unit: *s*
Type: *float*
Cache timeout for names.
.. option:: -o negative_timeout=T
Default: ``0``
Unit: *s*
Type: *float*
Cache timeout for deleted names.
.. option:: -o attr_timeout=T
Default: ``1``
Unit: *s*
Type: *float*
Cache timeout for attributes.
.. option:: -o ac_attr_timeout=T
Default: ``attr_timeout``
Unit: *s*
Type: *float*
Auto cache timeout for attributes.
.. option:: -o noforget
Never forget cached inodes.
.. option:: -o remember=T
Default: ``0``
Unit: *s*
Type: *float*
Remember cached inodes for T seconds.
.. option:: -o modules=M1[:M2...]
Names of modules to push onto filesystem stack.
.. option:: -o allow_other
Allow access by all users.
.. option:: -o allow_root
Allow access by root.
.. option:: -o auto_unmount
Auto unmount on process termination.
.. _Options for subdir module:
Options for subdir module
-------------------------
.. option:: -o subdir=DIR
Prepend this directory to all paths \(mandatory\).
.. option:: -o [no]rellinks
Transform absolute symlinks to relative.
.. _Options for iconv module:
Options for iconv module
------------------------
.. option:: -o from_code=CHARSET
Default: ``UTF-8``
Original encoding of file names.
.. option:: -o to_code=CHARSET
Default: ``UTF-8``
New encoding of the file names.
.. highlight:: rst
.. _capstool:
########
capstool
########
**CAPS command-line interface (CLI) client.**
Description
===========
capstool is a CAPS client application for retrieving data and listing available
streams from an operational CAPS server.
Applications
============
* Connectivity test to a CAPS server (:option:`-P`).
* Request of available streams (:option:`-Q`, :option:`-I`). The result set may
vary depending on the client's IP address or the user name used for the
connection.
* Data retrieval to stdout or individual files (:option:`-o`). Data may be
requested in order of sampling time or time of arrival (:option:`--ooo`). It may
also be retrieved downsampled to 1Hz (:option:`--heli`).
* Data quality control by listing gaps (:option:`-G`), continuous data segments
(:option:`-S`) or record arrival times (:option:`-M`).
* Data cleanup on the server side (:option:`--purge`).
* Retrieval of server statistics (:option:`-X`).
Input
=====
The program reads requests from a file or from standard input if no file is specified.
The request format is defined as follows:
.. code-block:: params
YYYY,MM,DD,HH,MM,SS YYYY,MM,DD,HH,MM,SS Network Station [Location] Channel
Each request line contains a start and an end time followed by a stream ID. The
fields Network, Station, Channel and Location support wildcards (*). The
location field is optional. To match all locations use the '*' symbol; if the
field is empty, only empty location codes are requested.
.. note::
The request lines can be generated for a particular event using
:cite:t:`scevtstreams` as of the SeisComP3 release Jakarta-2018.xxx.
Example:
.. code-block:: params
2010,02,18,12,00,00 2010,02,18,12,10,00 GE WLF BH*
2010,02,18,12,00,00 2010,02,18,12,10,00 GE VSU 00 BH*
Output
======
The output format differs by record type. Below is an overview of the available
formats.
.. csv-table::
:header: "Record type", "Output data format"
:widths: 1,1
RAW, ASCII SLIST
MSEED, MSEED
ANY, Stored data format
.. note::
When retrieving miniSEED data the records are not necessarily sorted by time.
However, sorting by time is required, e.g., for processing in playbacks.
Use :cite:t:`scmssort` for sorting the records by time. Example:
.. code-block:: sh
scmssort -E -u data.mseed > data_sorted.mseed
Examples
========
* **List available streams:**
.. code-block:: sh
capstool -H localhost:18002 -Q
* **Secured connection:**
Connect via Secure Sockets Layer (SSL) and supply credentials for
authentication.
.. code-block:: sh
capstool -H localhost:18002 -s -c user:password -Q
* **Time-based request with request file:**
Request file to load miniSEED data for some GE stations:
.. code-block:: params
2010,02,18,12,00,00 2010,02,18,12,10,00 GE WLF BH*
2010,02,18,12,00,00 2010,02,18,12,10,00 GE VSU BH*
Submit the request in :file:`req.txt` to the CAPS server, and download miniSEED
data to the file :file:`data.mseed`.
.. code-block:: sh
capstool -H localhost:18002 -o data.mseed req.txt
* **Time-based request without request file:**
Request miniSEED data from a CAPS server. Provide request parameters from
standard input. Write the miniSEED data to standard output. Re-direct the
output and append it to a file, e.g., :file:`data.mseed`:
.. code-block:: sh
echo "2015,11,08,10,47,00 2015,11,08,11,00,00 * * BH?" |\
capstool -H localhost:18002 >> data.mseed
* **Event-based request:**
Request miniSEED data from a CAPS server for a particular event with ID <eventID>.
Provide the request file using :cite:t:`scevtstreams`. Write the miniSEED data
to standard output. Re-direct the output to a file, e.g., :file:`<eventID>.mseed`.
.. code-block:: sh
scevtstreams -d mysql://sysop:sysop@localhost/seiscomp -E <eventID> --caps > req.txt
capstool -H localhost:18002 req.txt > <eventID>.mseed
* **Video data:**
Request to load video data from Station HILO. Request file:
.. code-block:: params
2013,08,01,00,00,00 2013,08,01,00,30,00 VZ HILO WLS CAM
Submit the request in :file:`req.txt` to the CAPS server, and download the video data
to files using the given pattern:
.. code-block:: sh
capstool -H localhost:18002 -o "%H%M%S.%f" req.txt
Command-Line Options
====================
:program:`capstool [options]`
.. _Options:
Options
-------
.. option:: -h, --help
Show a help message and exit.
.. option:: -H, --host HOST[:PORT]
Default: ``localhost:18002``
Host and optionally port of the CAPS server \(default is localhost:18002\).
.. option:: -s, --ssl
Use secure socket layer \(SSL\).
.. option:: -c, --credentials USER[:PASSWORD]
Authentication credentials. If the password is omitted, it is asked for on command\-line.
.. option:: -P, --ping
Retrieve server version information and exit.
.. option:: -Q
Print availability extents of all data streams.
.. option:: -I, --info-streams FILTER
Like \-Q but uses a regular expression filter for the requested streams, e.g., AM.\*.
.. option:: --filter-list
Identical to \-I.
.. option:: --mtime [start]:[end]
Restrict request to record modification time window. Time format:
%Y,%m,%d[,%H[,%M[,%S[,%f]]]]
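A matching time window can be assembled with GNU date (the `-d` option is a GNU
extension), e.g., covering the last 24 hours:

```shell
# Build --mtime start/end strings in the %Y,%m,%d,%H,%M,%S format.
start=$(date -u -d '24 hours ago' +%Y,%m,%d,%H,%M,%S)
end=$(date -u +%Y,%m,%d,%H,%M,%S)
echo "--mtime ${start}:${end}"
```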
.. option:: -X, --info-server
Request server statistics in JSON format.
.. option:: --modified-after TIME
Limit server statistics request to data modified after a specific time. Time format:
%Y,%m,%d[,%H[,%M[,%S[,%f]]]]
.. option:: --force
Disable any confirmation prompts.
.. _Options (request file, no data download):
Options (request file, no data download)
----------------------------------------
.. option:: -G, --print-gaps
Request list of data gaps.
.. option:: -S, --print-segments
Request list of continuous data segments.
.. option:: --tolerance SECONDS
Default: ``0``
Threshold in seconds defining a data gap \(decimal point, microsecond precision\).
.. option:: -R, --Resolution DAYS
Default: ``0``
The resolution in multiples of days of the returned data segments or gaps. A value of 0 returns
segments based on stored data records. A value larger than zero will return the minimum and
maximum data times within windows of one, two or more days. Consecutive segments are merged
if their end and start times are within the tolerance.
.. option:: --print-stat
Request storage information with a granularity of one day.
.. option:: --purge
Deletes data from CAPS archive with a granularity of one
day. Any data file intersecting with the time window
will be purged. The user requires the purge permission.
.. _Options (request file and data download):
Options (request file and data download)
----------------------------------------
.. option:: -o, --output-file FILE
Output file for received data \(default: \-\). The file name is used as a prefix with the
extension added based on the record type \(MSEED, RAW, ANY, META, HELI\). Multiple files are
created if mixed data types are received. For 'ANY' records the file name may contain the
following format controls: %Y \- year, %j \- day of year, %H \- hour, %M \- minute, %S \- second, %F
\- format.
.. option:: --any-date-format FORMAT
Default: ``%Y%m%d_%H%M%S``
Date format to use for 'ANY' files, see 'man strftime'.
.. option:: -t, --temp-file FILE
Use temporary file to store data. On success move to output\-file.
.. option:: --rt
Enable real time mode.
.. option:: --ooo
Request data in order of transmission time instead of sampling time.
.. option:: --out-of-order
Identical to \-\-ooo.
.. option:: -D, --heli
Request down\-sampled data \(1Hz\). The server will taper, bandpass filter and re\-sample the data.
.. option:: --itaper SECONDS
Timespan in SECONDS for the one\-sided cosine taper.
.. option:: --bandpass RANGE
Corner frequency RANGE of the bandpass filter, e.g., 1.0:4.0.
.. option:: -M, --meta
Request record meta data only.
.. option:: -v, --version VERSION
Request a specific format version. Currently only supported in meta requests.
.. highlight:: rst
.. _crex2caps:
#########
crex2caps
#########
**CREX CAPS plugin. Reads CREX data from file and pushes the data into the given CAPS server.**
Module Configuration
====================
| :file:`etc/defaults/global.cfg`
| :file:`etc/defaults/crex2caps.cfg`
| :file:`etc/global.cfg`
| :file:`etc/crex2caps.cfg`
| :file:`~/.seiscomp/global.cfg`
| :file:`~/.seiscomp/crex2caps.cfg`
crex2caps inherits :ref:`global options<global-configuration>`.
.. note::
Modules/plugins may require a license file. The default path to license
files is :file:`@DATADIR@/licenses/` which can be overridden by global
configuration of the parameter :confval:`gempa.licensePath`. Example: ::
gempa.licensePath = @CONFIGDIR@/licenses
.. _input:
.. confval:: input.readFrom
Type: *string*
Read input files from this file
.. confval:: input.directory
Type: *string*
Watch this directory for incoming input files
.. confval:: input.watchEvents
Default: ``close_write``
Type: *string*
Listen for specific inotify event\(s\). If omitted, close_write events are listened for. Events:
access \- file or directory contents were read,
modify \- file or directory contents were written,
attrib \- file or directory attributes changed,
close_write \- file or directory closed, after being opened in writable mode,
close_nowrite \- file or directory closed, after being opened in read\-only mode
close \- file or directory closed, regardless of read\/write mode
open \- file or directory opened
moved_to \- file or directory moved to watched directory
moved_from \- file or directory moved from watched directory
move \- file or directory moved to or from watched directory
create \- file or directory created within watched directory
delete \- file or directory deleted within watched directory
delete_self \- file or directory was deleted
unmount \- file system containing file or directory unmounted
.. confval:: input.watchPattern
Type: *string*
Process any events whose filename matches the specified regular expression
.. _output:
.. confval:: output.host
Default: ``localhost``
Type: *string*
Data output host
.. confval:: output.port
Default: ``18003``
Type: *int*
Data output port
.. confval:: output.bufferSize
Default: ``1048576``
Type: *uint*
Size \(bytes\) of the packet buffer
.. _streams:
.. confval:: streams.file
Type: *string*
File to read streams from. Each line defines a mapping between a station and stream id. Line format is [ID NET.STA].
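A streams file following this format might look like (the IDs are hypothetical):

.. code-block:: params

10591 GE.WLF
10592 GE.VSU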
Command-Line Options
====================
.. _Generic:
Generic
-------
.. option:: -h, --help
Show help message.
.. option:: -V, --version
Show version information.
.. option:: --config-file arg
Use alternative configuration file. When this option is
used the loading of all stages is disabled. Only the
given configuration file is parsed and used. To use
another name for the configuration create a symbolic
link of the application or copy it. Example:
scautopick \-> scautopick2.
.. _Verbosity:
Verbosity
---------
.. option:: --verbosity arg
Verbosity level [0..4]. 0:quiet, 1:error, 2:warning, 3:info,
4:debug.
.. option:: -v, --v
Increase verbosity level \(may be repeated, e.g. \-vv\).
.. option:: -q, --quiet
Quiet mode: no logging output.
.. option:: --print-component arg
For each log entry print the component right after the
log level. By default the component output is enabled
for file output but disabled for console output.
.. option:: --component arg
Limit the logging to a certain component. This option can
be given more than once.
.. option:: -s, --syslog
Use syslog logging backend. The output usually goes to
\/var\/log\/messages.
.. option:: -l, --lockfile arg
Path to lock file.
.. option:: --console arg
Send log output to stdout.
.. option:: --debug
Execute in debug mode.
Equivalent to \-\-verbosity\=4 \-\-console\=1 .
.. option:: --trace
Execute in trace mode.
Equivalent to \-\-verbosity\=4 \-\-console\=1 \-\-print\-component\=1
\-\-print\-context\=1 .
.. option:: --log-file arg
Use alternative log file.
.. _Input:
Input
-----
.. option:: --station arg
Sets the station and sampling interval to use. Format is [net.sta\@?]
.. option:: -f, --file arg
Load CREX data directly from file
.. option:: --read-from arg
Read input files from this file
.. _Output:
Output
------
.. option:: -H, --host arg
Data output host
.. option:: -p, --port arg
Data output port
.. _Streams:
Streams
-------
.. option:: --streams-file arg
File to read streams from. Each line defines a mapping between a station and stream id. Line format is [ID NET.STA].
.. highlight:: rst
.. _data2caps:
#########
data2caps
#########
**Send data in easy-to-change formats to CAPS.**
Description
===========
*data2caps* reads data from a file and sends it in :ref:`RAW format <sec-pt-raw>`
to a CAPS server. The list of supported file formats can be easily extended,
allowing to import almost any custom data file containing time series. The data
samples are converted to integer values. A multiplier can be applied to reach
the desired precision. The multiplier can be passed by the command-line option
:option:`--multiplier`. During data processing the multiplier must be considered.
The best way to do so is to correct the gain in the :term:`inventory` by the
multiplier.
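The effect of the multiplier can be illustrated with a short awk sketch; the
multiplier and sample values are illustrative, not defaults:

```shell
# Scale float samples to integer counts with a multiplier of 10^6,
# rounding to the nearest integer (sketch of what a multiplier does).
printf '0.000134157\n0.000286938\n' |
awk '{ printf "%d\n", $1 * 1000000 + ($1 >= 0 ? 0.5 : -0.5) }'
# -> 134 and 287
```

With a multiplier of :math:`10^{6}` the gain in the inventory must be corrected
by the same factor.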
Supported file formats which can be given along with :option:`--format`:
* slist
.. code-block:: properties
TIMESERIES AM_ABCDE_00_SHZ_R, 8226 samples, 50 sps, 2020-01-01T10:20:03.862000, SLIST, FLOAT, M/S
0.000134157
0.000286938
...
data2caps assumes files with exactly one block of data, starting, e.g., with
*TIMESERIES*. Files containing multiple blocks must be split into multiple
one-block files before processing these files individually with data2caps. For
splitting you may use external programs, e.g., csplit.
Example for processing one file, *vz.data.raw*, containing multiple blocks:
.. code-block:: sh
csplit -z vz.data.raw /TIMESERIES/ '{*}'
for i in xx*; do data2caps -i "$i" -f slist; done
* unavco
The format supports tilt and pressure data on the data website of :cite:t:`unavco`
in the versions
* version 1.0: Requires setting the network code using :option:`--network` since
it is not provided within the data files.
* version 1.1
.. note::
* The versions 1.0 and 1.1 are automatically recognized.
* If no multiplier is specified by :option:`--multiplier`, unit conversion is
applied to the data for maintaining high resolution in :term:`miniSEED` format:
* hPa : Pa
* microradians : nRad
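These conversions amount to fixed factors (1 hPa = 100 Pa, 1 microradian =
1000 nRad); a quick sketch with illustrative sample values:

```shell
# Fixed unit-conversion factors applied when no multiplier is given (sketch).
awk 'BEGIN { printf "%.0f Pa\n",   1013.25 * 100 }'   # hPa -> Pa
awk 'BEGIN { printf "%.0f nRad\n", 79.096 * 1000 }'   # microradians -> nRad
# -> 101325 Pa and 79096 nRad
```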
If no input file is given, :program:`data2caps` creates a generic data series and sends
it to the CAPS server.
.. warning::
The CAPS server to which data2caps should send data to must be up and running.
Examples
========
* Send data from a file in slist format to a CAPS server on *localhost:18003*:
.. code-block:: sh
data2caps -H localhost:18003 -i AM.ABCDE.00.SHZ-acc.slist -f slist
* Send tilt data from a file in unavco 1.1 format to a CAPS server on *localhost:18003*.
The data is automatically converted from µRad (microradians) to nRad (nanoradians):
.. code-block:: sh
data2caps -H localhost:18003 -i B2012327816TiltYuRad.txt -f unavco
Command-Line Options
====================
:program:`data2caps [options]`
.. _Options:
Options
-------
.. option:: -H, --host arg
Default: ``localhost``
Data output host. Format: host:port. Port 18003 is assumed
if not given explicitly. Default: localhost:18003.
.. option:: -h, --help
Print help.
.. option:: -i, --input file
Name of input data file.
.. option:: -f, --format arg
Values: ``slist,unavco``
Format of input data file. Supported: slist, unavco.
.. option:: -m, --multiplier arg
Multiplier applied to data samples for generating integers.
.. option:: -n, --network arg
Network code to be used for the data. Required for format unavco in
version 1.0 since this format does not provide a network code.
.. highlight:: rst
.. _gdi2caps:
########
gdi2caps
########
**CAPS import module for Guralp GDI server.**
Description
===========
The Güralp Data Interconnect (GDI) plugin requests data from one or multiple GDI
servers and sends it to a CAPS server. The communication between a GDI server
and the plugin is based on the GDI client library, whereas outgoing
packets are sent through the CAPS client library. Depending on the configuration,
outgoing packets are converted on-the-fly into MSEED by the CAPS client library.
The plugin supports the following GDI sample formats:
* INT32
* INT16
* IEEE32FLOAT
* TEXT
Backfilling
===========
By default, backfilling of unordered packets is enabled and the buffer size is
set to 180 seconds. With backfilling enabled, the CAPS client library ensures
that all packets within this time window are sent in order to the remote CAPS
server. The buffer size can be changed in the plugin configuration. A value of
zero disables backfilling.
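A minimal module configuration sketch shrinking the backfilling buffer to 60
seconds (the values are illustrative; the parameters are documented below):

.. code-block:: properties

caps.address = localhost:18003
caps.backFillingBufferSize = 60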
Connection handling
===================
CAPS Connection
---------------
All packets forwarded to the CAPS client library are stored in a local packet
buffer and are removed once they have been acknowledged by the remote CAPS
server. If a packet could not be sent, the plugin closes the connection and
tries to reconnect in a certain interval. If the packet buffer is exceeded, the
packet is dropped and the client library returns with an error.
GDI Connection
--------------
For each configured GDI connection the plugin opens a connection to the remote
GDI server. The plugin requests real-time data only; the retrieval of "historic"
data is not supported yet. Every second the plugin checks the connection state.
If the state is GDI_State_Out_Of_Sync, it closes the connection and tries to
reconnect in a certain interval.
Module Configuration
====================
.. note::
* gdi2caps is a standalone module and does not inherit
:ref:`global options <global-configuration>`.
* Modules/plugins may require a license file. The default path to license
files is :file:`@DATADIR@/licenses/` which can be overridden by module
configuration of the parameter :confval:`gempa.licensePath`. Example: ::
gempa.licensePath = @CONFIGDIR@/licenses
| :file:`etc/defaults/gdi2caps.cfg`
| :file:`etc/gdi2caps.cfg`
| :file:`~/.seiscomp/gdi2caps.cfg`
.. confval:: mapping
Type: *strings*
List of station name mappings separated by comma. Each list
entry has the format [name]:[alias]
.. confval:: mseed
Default: ``false``
Type: *boolean*
Enable MSEED encoding.
.. confval:: selectors
Type: *list:string*
Format: [loc.cha, ...]. Wildcards are supported.
.. _caps:
.. confval:: caps.address
Type: *string*
CAPS server address. Format is [address[:port]].
.. confval:: caps.backFillingBufferSize
Default: ``180``
Unit: *s*
Type: *int*
Length of backfilling buffer. Whenever a hole is detected, records
will be held in a buffer and not sent out. Records are flushed from
front to back if the buffer size is exceeded.
.. _profiles:
.. _profiles.$name:
.. note::
**profiles.$name.\***
$name is a placeholder for the name to be used.
.. confval:: profiles.$name.source
Type: *string*
GDI server address in format [host]:[port]. If port
is omitted, 1565 is assumed.
.. confval:: profiles.$name.identifier
Type: *string*
GDI connection identifying name. If two connections
use the same name, the first connection will be
closed by the server.
If omitted, the hostname is used.
.. confval:: profiles.$name.mapping
Type: *strings*
List of station name mappings separated by comma. Each
entry has the format [name]:[alias]
.. confval:: profiles.$name.selectors
Type: *list:string*
List of selectors separated by comma. Each entry
has the format [loc.cha]. Wildcards are supported.
Bindings Parameters
===================
.. confval:: address
Type: *string*
GDI server address in format [host]:[port]. If port
is omitted, 1565 is assumed.
.. confval:: identifier
Type: *string*
GDI connection identifying name. If two connections
use the same name, the first connection will be
closed by the server.
If omitted, the hostname is used.
.. confval:: mapping
Type: *strings*
List of station name mappings separated by comma. Each list
entry has the format [name]:[alias]
.. confval:: selectors
Type: *list:string*
List of selectors separated by comma. Each entry
has the format [loc.cha]. Wildcards are supported.
.. highlight:: rst
.. _ngl2caps:
########
ngl2caps
########
**NGL CAPS plugin. Reads GNSS data in kenv format and sends it to CAPS.**
Description
===========
Read GNSS data in kenv format, convert to RAW and send to a CAPS server.
Waveform Data
=============
Format descriptions
-------------------
* **kenv Format**:
Final 5 min rapid solutions from stations [stat]:
http://geodesy.unr.edu/gps_timeseries/rapids_5min/kenv/[stat]/
* Sample rate: 1/300 sps
* Example: http://geodesy.unr.edu/gps_timeseries/rapids_5min/kenv/0ALM/
* Format (http://geodesy.unr.edu/gps_timeseries/README_kenv.txt):
.. code-block:: properties
----------------------------------------
.kenv format (east,north,up time series)
----------------------------------------
Note: first line of .kenv contains header fields to help human interpretation.
Column Header Example Description
------ ----------- --------- ------------------------------------
1 site JPLM 4-character station ID
2 sec-J2000 3705278100 GPS seconds since 2000-01-01 12:00:00
3 __MJD 55833 Modified Julian Day for GPS day
4 year 2011 Year
5 mm 9 Month
6 dd 29 Day of month
7 doy 272 Day of year
8 s-day 10800 Seconds of the GPS day
9 ___e-ref(m) -0.202261 East from reference longitude in llh
10 ___n-ref(m) 0.079096 North from reference latitude in llh
11 ___v-ref(m) -0.025883 Up from reference height in llh
12 _e-mean(m) -0.015904 East from daily mean position
13 _n-mean(m) -0.000944 North from daily mean position
14 _v-mean(m) 0.000232 Up from daily mean position
15 sig_e(m) 0.005700 Sigma east
16 sig_n(m) 0.006875 Sigma north
17 sig_v(m) 0.021739 Sigma up
* RINEX data web server: http://geodesy.unr.edu/magnet/rinex/
Data sources
------------
kenv data are provided by Nevada Geodetic Laboratory, NGL
(http://geodesy.unr.edu/):
* **Preferred:** Rapid data, 24 hours latency, one sample per 5 minutes, 1 ZIP file per year:
http://geodesy.unr.edu/NGLStationPages/RapidStationList
Archives:
http://geodesy.unr.edu/gps_timeseries/kenv/
Example for one station, 1 year:
http://geodesy.unr.edu/gps_timeseries/kenv/0ABI/0ABI.2022.kenv.zip
* Ultra-rapid data, 1 hour latency, one sample per 5 minutes (may have many gaps and outages):
http://geodesy.unr.edu/NGLStationPages/UltraStationList
Hourly upload:
http://geodesy.unr.edu/gps_timeseries/ultracombo/kenv/2022/141/
Archives:
http://geodesy.unr.edu/gps_timeseries/kenv/
* Final 24 h solutions from stations [stat]:
http://geodesy.unr.edu/gps_timeseries/txyz/IGS14/[stat].txyz2
Example: http://geodesy.unr.edu/gps_timeseries/txyz/IGS14/ARIS.txyz2
Fetch data
----------
.. note::
Data on the NGL server are zipped files, one file per year.
Geoffrey Blewitt explains why this is:
"We zip into yearly files which are updated every week. The actual day it is
updated varies depending on when other necessary inputs are ready, such as
JPL orbit files, and weather model files from TU Vienna, and ocean loading
files from Chalmers, Sweden. So, you would not want to check every day, but
I would say every Wednesday would typically work.
It's also important to note that while we process Final data every week,
often we include newly discovered or late data that can go as far back as
1994. So, we are not just incrementing with a new week of files.
On the technical side, we zip the data into yearly files to reduce the number
of “inodes” (files) on our server, which can slow things down when attempting
to seek files, and in the past has overloaded the server with maximum number
of inodes.
Zipping files has speeded up our operations considerably and allows us to
process all the world's GPS geodetic data (currently > 18,000 stations).
It also allows for efficient backup, which can take far too long with
individual files."
To fetch data from the server above, adjust the following in the Python script
provided with the source code, :file:`ngl/plugin/tools/fetchFilesFromWeb.py`:

* *year*: year to consider as set in the file name. Use "" for all.
* *url*: source URL
* *outPut*: target directory of fetched files

Then run the script:
.. code-block:: sh
python fetchFilesFromWeb.py
Change to the target directory and unpack the files:
.. code-block:: sh
cd [target]
unzip '*.zip'
gunzip *
Running ngl2caps converts the data to RAW and sends them to the CAPS server. The
assumed physical unit of the output data is **nanometer** and the original data
are assumed to be in units of **meter**. The conversion is applied through the
:confval:`gain` which is therefore **1,000,000,000** by default.
**Make sure to generate the inventory with the correct gain and gainUnit.**
If data are stored in "nm" and gainUnit of the stream in the inventory is "m" (SI unit),
the gain in the inventory must be :math:`10^{-9}`. Example:
.. code-block:: sh
ngl2caps --debug -p [target directory] --gain 1000000000
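The gain arithmetic above can be checked with a quick round trip. This is a
minimal sketch using only standard tools; the displacement value is a
hypothetical example:

```shell
# Hypothetical displacement of 0.015904 m.
# ngl2caps multiplies meters by the gain (1e9) to store nanometers;
# a consumer applies the inventory gain of 1e-9 to recover meters.
m=0.015904
nm=$(awk -v m="$m" 'BEGIN { printf "%.0f", m * 1e9 }')
back=$(awk -v nm="$nm" 'BEGIN { printf "%.6f", nm * 1e-9 }')
echo "$m m -> $nm nm -> $back m"
# prints: 0.015904 m -> 15904000 nm -> 0.015904 m
```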
Output streams
--------------
The plugin writes the following parameters to the header of the output data.
.. csv-table::
:widths: 2 2 1 5
:align: left
:delim: ;
:header: group, parameter, added, value/remark
network; code; x; from input parameter
network; start time; x; from input parameter
station; code; x; from input file
station; start time; x; same as network
station; coordinates; x; from input file
sensor location; code; x; from input parameter
sensor location; start; x; same as network
sensor location; elevation; x; same as station
stream; code; x; from band + [XYZ][ZNE]
; ; ; from band + [X][ZNE] : derived data, daily mean removed
; ; ; from band + [Y][ZNE] : raw data
; ; ; from band + [Z][ZNE] : sigma data
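With the default band code ``U`` (:confval:`streams.bandCode`), the scheme
above expands to nine stream codes; a small sketch enumerating them:

```shell
# Enumerate stream codes [band][XYZ][ZNE] for band code U:
# X = derived data (daily mean removed), Y = raw data, Z = sigma data.
band=U
for stype in X Y Z; do
  for comp in Z N E; do
    printf '%s%s%s ' "$band" "$stype" "$comp"
  done
done
echo
# prints: UXZ UXN UXE UYZ UYN UYE UZZ UZN UZE
```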
Inventory
=========
* Access to different stations lists and formats: http://geodesy.unr.edu/PlugNPlayPortal.php
* Station list (HTML): http://geodesy.unr.edu/NGLStationPages/GlobalStationList
* Station list (TXT, 1/300 sps, 24 hour latency):
http://geodesy.unr.edu/NGLStationPages/DataHoldingsRapid5min.txt
* Details about stations [stat]: http://geodesy.unr.edu/NGLStationPages/stations/[stat].sta
* Station table with coordinates:
http://plugandplay.unavco.org:8080/unrgsac/gsacapi/site/search#tabId3153-1
:ref:`table2inv` can be used for conversion of a station table to :term:`SCML`.
Module Configuration
====================
| :file:`etc/defaults/global.cfg`
| :file:`etc/defaults/ngl2caps.cfg`
| :file:`etc/global.cfg`
| :file:`etc/ngl2caps.cfg`
| :file:`~/.seiscomp/global.cfg`
| :file:`~/.seiscomp/ngl2caps.cfg`
ngl2caps inherits :ref:`global options<global-configuration>`.
.. note::
Modules/plugins may require a license file. The default path to license
files is :file:`@DATADIR@/licenses/` which can be overridden by global
configuration of the parameter :confval:`gempa.licensePath`. Example: ::
gempa.licensePath = @CONFIGDIR@/licenses
.. _input:
.. note::
**input.\***
*Parameters controlling the input of event information and*
*reception of GNSS data.*
.. confval:: input.readFrom
Type: *string*
Read input data from this file.
.. confval:: input.directory
Type: *string*
Watch this directory for incoming input files.
.. confval:: input.watchEvents
Default: ``close_write``
Type: *string*
Listen for specific inotify event\(s\).
If omitted, close_write events are listened for. Events:
access \- file or directory contents were read,
modify \- file or directory contents were written,
attrib \- file or directory attributes changed,
close_write \- file or directory closed, after being opened in writable mode,
close_nowrite \- file or directory closed, after being opened in read\-only mode,
close \- file or directory closed, regardless of read\/write mode,
open \- file or directory opened,
moved_to \- file or directory moved to watched directory,
moved_from \- file or directory moved from watched directory,
move \- file or directory moved to or from watched directory,
create \- file or directory created within watched directory,
delete \- file or directory deleted within watched directory,
delete_self \- file or directory was deleted,
unmount \- file system containing file or directory unmounted.
.. confval:: input.watchPattern
Type: *string*
Process any event whose filename matches the specified regular expression.
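For instance, to restrict processing to yearly kenv ZIP files, a pattern like
the following could be used; the pattern itself is a hypothetical example, not
a shipped default:

```shell
# Match names like 0ABI.2022.kenv.zip; skip anything else.
pattern='^[A-Z0-9]{4}\.[0-9]{4}\.kenv\.zip$'
for name in 0ABI.2022.kenv.zip ARIS.2023.kenv.zip notes.txt; do
  if echo "$name" | grep -Eq "$pattern"; then
    echo "match: $name"
  else
    echo "skip:  $name"
  fi
done
```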
.. confval:: input.leapSecondsFile
Default: ``@DATADIR@/caps/plugins/ngl2caps/leapseconds.txt``
Type: *string*
Name of file with leap seconds.
.. _streams:
.. note::
**streams.\***
*Parameters controlling the processing of received data.*
.. confval:: streams.networkCode
Default: ``NG``
Type: *string*
Network code to use.
.. confval:: streams.locationCode
Type: *string*
Location code to use.
.. confval:: streams.bandCode
Default: ``U``
Type: *string*
Band code of streams to use. Streams will be formed as
[band][sensor type][component].
.. _output:
.. note::
**output.\***
*Parameters controlling the output of received data.*
.. confval:: output.host
Default: ``localhost``
Type: *string*
Data output host.
.. confval:: output.port
Default: ``18003``
Type: *int*
Data output port.
.. confval:: output.bufferSize
Default: ``1048576``
Type: *uint*
Size \(bytes\) of the packet buffer.
.. confval:: output.gain
Default: ``1000000``
Type: *float*
Apply given gain to samples.
Command-Line Options
====================
.. _Generic:
Generic
-------
.. option:: -h, --help
Show help message.
.. option:: -V, --version
Show version information.
.. option:: --config-file arg
Use alternative configuration file. When this option is
used the loading of all stages is disabled. Only the
given configuration file is parsed and used. To use
another name for the configuration create a symbolic
link of the application or copy it. Example:
scautopick \-> scautopick2.
.. _Verbosity:
Verbosity
---------
.. option:: --verbosity arg
Verbosity level [0..4]. 0:quiet, 1:error, 2:warning, 3:info,
4:debug.
.. option:: -v, --v
Increase verbosity level \(may be repeated, eg. \-vv\).
.. option:: -q, --quiet
Quiet mode: no logging output.
.. option:: --print-component arg
For each log entry print the component right after the
log level. By default the component output is enabled
for file output but disabled for console output.
.. option:: --component arg
Limit the logging to a certain component. This option can
be given more than once.
.. option:: -s, --syslog
Use syslog logging backend. The output usually goes to
\/var\/lib\/messages.
.. option:: -l, --lockfile arg
Path to lock file.
.. option:: --console arg
Send log output to stdout.
.. option:: --debug
Execute in debug mode.
Equivalent to \-\-verbosity\=4 \-\-console\=1 .
.. option:: --trace
Execute in trace mode.
Equivalent to \-\-verbosity\=4 \-\-console\=1 \-\-print\-component\=1
\-\-print\-context\=1 .
.. option:: --log-file arg
Use alternative log file.
.. _Input event:
Input event
-----------
.. option:: -d, --directory arg
Watch this directory for incoming event input files. By
default the current directory is watched.
.. option:: --watch-pattern arg
Process any event file which file name matches the specified
regular expression.
.. _Input waveforms:
Input waveforms
---------------
.. option:: -f, --file arg
Read kenv data directly from this given file.
.. option:: -p, --read-from arg
Read all kenv files from this directory path. Only
considered if file is not given.
.. option:: -l, --leap-file arg
Path to leap seconds file.
.. _Filter:
Filter
------
.. _Streams:
Streams
-------
.. option:: --network-code arg
Network code to use.
.. option:: --location-code arg
Sensor location code to use.
.. option:: --band-code arg
Band code to use. Streams will be formed as [band code][xyz][ZNE].
.. _Output:
Output
------
.. option:: -H, --host arg
Data output host.
.. option:: -P, --port arg
Data output port.
.. option:: -g, --gain arg
Gain value applied to the data for unit conversion to nm.
.. option:: -u, --gain-unit arg
Gain unit to write to data file.
.. highlight:: rst
.. _orb2caps:
########
orb2caps
########
**Provides miniSEED data from an Antelope ORB. Operates on the Antelope system.**
Description
===========
The orb2caps plugin is an application independent of SeisComP that transfers
real-time data from an Antelope ORB to a :ref:`CAPS server <sec-caps-server>`.
The plugin is configured on and runs on the same machine as the Antelope
system. All data is encoded as :term:`miniSEED` Steim2 on the server side
prior to transfer.
Setup
=====
#. Copy the :program:`orb2caps` binary to Antelope bin folder of the respective
Antelope version. Example:
.. code-block:: sh
cp orb2caps /opt/antelope/[version]/bin/
#. Edit :file:`rtexec.pf` in the rtsystem folder
* add entry in process table:
.. code-block:: properties
Processes &Tbl{
...
# <name> <executable including parameters>
orb2caps orb2caps -m 'OM.*/MGENC/(ACC|M100|MBB)' localhost:6510 192.168.20.130:18003
}
* add entry in run list for automatic startup:
.. code-block:: properties
Run &Arr{
...
# name of registered process
orb2caps
}
#. Check if the plugin is running:
.. code-block:: sh
ps aux | grep orb2caps
.. highlight:: rst
.. _rifftool:
########
rifftool
########
**CAPS data file analysis tool**
Description
===========
The CAPS server uses the RIFF file format for data archiving. :program:`rifftool`
may be used to analyse the RIFF container, check data integrity, print record
statistics and meta data, and to extract raw data stored in the files. The
output depends on the selected operational mode and is written to stdout.
:program:`rifftool` addresses files directly without a request to the
:ref:`CAPS server <sec-caps-server>`. This is in contrast to :ref:`capstool`
which makes server requests. Hence, rifftool can be used to extract data from
files, e.g., in miniSEED format, even if the CAPS server is not operational.
.. csv-table::
:header: "Mode"; "Description"
:widths: 10,90
:delim: ;
chunks; Dump individual RIFF chunks including size, position and meta information specific to individual chunk type.
index; Dump the CAPS record index organized in a B+ tree (BPT).
data; Dump the raw data stored in the CAPS records. The format is specific to the record type and different record types may be mixed in one file. E.g., if all records are of type miniSEED then the result will be a miniSEED conform file with the records sorted by sampling time.
check; Check data integrity by validating the record order. Additional check assertions may be enabled through parameters.
records; Dump meta data of all records.
gaps; Dump data gaps found in a file.
overlaps; Dump data overlaps found in a file.
Examples
========
* Dump list of record meta data to stdout
.. code-block:: sh
rifftool records NET.STA.LOC.CHA.2022.000.data
* Write the raw data stored in the CAPS records to new file
.. code-block:: sh
rifftool data NET.STA.LOC.CHA.2022.000.data > data.mseed
* Print gaps to stdout
.. code-block:: sh
rifftool gaps NET.STA.LOC.CHA.2022.000.data
* Check data integrity by validating the record order
.. code-block:: sh
rifftool check NET.STA.LOC.CHA.2022.000.data
Command-Line Options
====================
:program:`rifftool [options] mode file`
Mode is one of: check, chunks, data, index, gaps, overlaps, records
.. _Check Assertions:
Check Assertions
----------------
.. option:: --no-gaps
Assert data file contains no gaps.
.. option:: --no-overlaps
Assert no overlaps in data file.
.. option:: --no-data-type-change
Assert no data type changes among records of same data file.
.. option:: --no-sampling-rate-change
Assert no sampling rate change among records of same data file.
.. highlight:: rst
.. _rs2caps:
#######
rs2caps
#######
**Recordstream data acquisition plugin**
Description
===========
*rs2caps* uses the |scname| :cite:t:`recordstream` to feed data into :ref:`CAPS`
or to stdout.
Examples
========
* Inject data into the CAPS server:
.. code-block:: sh
seiscomp start caps
rs2caps -I data.mseed --passthrough
Read the :ref:`examples section<sec-caps_example_offline>` to learn how to
use *rs2caps* to input data into the CAPS server.
* Read data from the file *data.mseed* resample to 10 Hz sample rate by the
RecordStream and write the resulting data to stdout:
.. code-block:: bash
rs2caps -I resample://file?rate=10/data.mseed --passthrough --dump-packets --mseed > test.mseed
You may join the command with :cite:t:`capstool` and :cite:t:`scmssort`:
.. code-block:: bash
echo "2024,01,01,00,00,00 2024,01,01,00,10,00 * * * *" | capstool -H localhost |\
rs2caps -I resample://file?rate=10/- --passthrough --dump-packets --mseed |\
scmssort -E > test.mseed
.. note::
A similar action with additional data processing may be executed using
:ref:`sproc2caps`.
Module Configuration
====================
| :file:`etc/defaults/global.cfg`
| :file:`etc/defaults/rs2caps.cfg`
| :file:`etc/global.cfg`
| :file:`etc/rs2caps.cfg`
| :file:`~/.seiscomp/global.cfg`
| :file:`~/.seiscomp/rs2caps.cfg`
rs2caps inherits :ref:`global options<global-configuration>`.
.. note::
Modules/plugins may require a license file. The default path to license
files is :file:`@DATADIR@/licenses/` which can be overridden by global
configuration of the parameter :confval:`gempa.licensePath`. Example: ::
gempa.licensePath = @CONFIGDIR@/licenses
.. _output:
.. confval:: output.address
Default: ``localhost:18003``
Type: *string*
Data output URL [[caps\|capss]:\/\/][user:pass\@]host[:port]. This parameter
supersedes the host and port parameters of previous versions and takes precedence.
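A sketch of how such an address decomposes, using shell parameter expansion on
a hypothetical full-form URL (host name and credentials are made up for
illustration):

```shell
# Hypothetical address in the documented full form.
addr='capss://user:pass@caps.example.org:18003'
rest=${addr#*://}          # drop the scheme, if any
creds=${rest%@*}           # user:pass (equals $rest when no credentials given)
hostport=${rest##*@}       # host:port
echo "host=${hostport%%:*} port=${hostport##*:} user=${creds%%:*}"
# prints: host=caps.example.org port=18003 user=user
```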
.. confval:: output.host
Default: ``localhost``
Type: *string*
Data output host. Deprecated: Use output.address instead.
.. confval:: output.port
Default: ``18003``
Type: *int*
Data output port. Deprecated: Use output.address instead.
.. confval:: output.timeout
Default: ``60``
Unit: *s*
Type: *int*
Timeout when sending a packet. If the timeout expires
the connection will be closed and re\-established.
.. confval:: output.maxFutureEndTime
Default: ``120``
Unit: *s*
Type: *int*
Maximum allowed relative end time for packets. If the packet
end time is greater than the current time plus this value,
the packet will be discarded. By default this value is set
to 120 seconds.
.. confval:: output.bufferSize
Default: ``1048576``
Unit: *bytes*
Type: *uint*
Size \(bytes\) of the packet buffer.
.. confval:: output.backFillingBufferSize
Default: ``0``
Unit: *s*
Type: *int*
Length of backfilling buffer. Whenever a gap is detected, records
will be held in a buffer and not sent out. Records are flushed from
front to back if the buffer size is exceeded.
.. _output.mseed:
.. confval:: output.mseed.enable
Default: ``false``
Type: *boolean*
Enable on\-the\-fly miniSEED
encoding. If the encoder does not support the input
type of a packet it will be forwarded. Re\-encoding of
miniSEED packets is not supported.
.. confval:: output.mseed.encoding
Default: ``Steim2``
Type: *string*
miniSEED encoding to use. \(Uncompressed, Steim1 or Steim2\)
.. _streams:
.. confval:: streams.begin
Type: *string*
Start time of data time window, default 'GMT'
.. confval:: streams.end
Type: *string*
End time of data time window
.. confval:: streams.days
Default: ``-1``
Unit: *day*
Type: *int*
Use to set the start time of data time window n days before the current time.
.. confval:: streams.daysBefore
Default: ``-1``
Unit: *day*
Type: *int*
Use to set the end time of data time window n days before the current time.
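For instance, ``streams.days = 7`` together with ``streams.daysBefore = 1``
selects the last week up to yesterday. The window boundaries can be sketched
with GNU ``date`` (assumed available):

```shell
# Window as streams.days = 7 and streams.daysBefore = 1 would select it,
# relative to the current time.
begin=$(date -u -d '7 days ago' '+%Y-%m-%dT%H:%M:%S')
end=$(date -u -d '1 day ago' '+%Y-%m-%dT%H:%M:%S')
echo "window: $begin -> $end"
```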
.. confval:: streams.passthrough
Default: ``false``
Type: *boolean*
Do not subscribe to any stream and
accept everything a record source is
passing. This is useful in combination
with files.
.. _journal:
.. confval:: journal.file
Default: ``@ROOTDIR@/var/run/rs2caps/journal``
Type: *string*
File to store stream states
.. confval:: journal.flush
Default: ``10``
Unit: *s*
Type: *uint*
Flush stream states to disk every n seconds
.. confval:: journal.waitForAck
Default: ``60``
Unit: *s*
Type: *uint*
Wait when a sync has been forced, up to n seconds
.. confval:: journal.waitForLastAck
Default: ``5``
Unit: *s*
Type: *uint*
Wait on shutdown to receive acknowledgement messages, up to n seconds
.. confval:: journal.syncWithBindings
Default: ``false``
Type: *boolean*
Whether to synchronize the journal file with bindings.
If enabled then each time update\-config is called, the
bound stations will be synchronized with the current
journal file. Unbound stations will be removed from
the journal. Synchronizing with bindings will disable
reading the inventory.
.. _statusLog:
.. confval:: statusLog.enable
Default: ``false``
Type: *boolean*
Log status information, e.g.,
max bytes buffered
.. confval:: statusLog.flush
Default: ``10``
Type: *uint*
Flush status every n seconds to disk
Bindings Parameters
===================
.. confval:: selectors
Type: *list:string*
List of stream selectors in format LOC.CHA.
If left empty all available streams will be requested.
Command-Line Options
====================
.. _Generic:
Generic
-------
.. option:: -h, --help
Show help message.
.. option:: -V, --version
Show version information.
.. option:: -D, --daemon
Run as daemon. This means the application will fork itself
and doesn't need to be started with \&.
.. _Verbosity:
Verbosity
---------
.. option:: --verbosity arg
Verbosity level [0..4]. 0:quiet, 1:error, 2:warning, 3:info,
4:debug.
.. option:: -v, --v
Increase verbosity level \(may be repeated, eg. \-vv\).
.. option:: -q, --quiet
Quiet mode: no logging output.
.. option:: -s, --syslog
Use syslog logging backend. The output usually goes to
\/var\/lib\/messages.
.. option:: -l, --lockfile arg
Path to lock file.
.. option:: --console arg
Send log output to stdout.
.. option:: --debug
Execute in debug mode.
Equivalent to \-\-verbosity\=4 \-\-console\=1 .
.. option:: --trace
Execute in trace mode.
Equivalent to \-\-verbosity\=4 \-\-console\=1 \-\-print\-component\=1
\-\-print\-context\=1 .
.. option:: --log-file arg
Use alternative log file.
.. _Records:
Records
-------
.. option:: --record-driver-list
List all supported record stream drivers.
.. option:: -I, --record-url arg
The recordstream source URL, format:
[service:\/\/]location[#type].
\"service\" is the name of the recordstream driver
which can be queried with \"\-\-record\-driver\-list\".
If \"service\" is not given, \"file:\/\/\" is
used.
.. option:: --record-file arg
Specify a file as record source.
.. option:: --record-type arg
Specify a type for the records being read.
.. _Output:
Output
------
.. option:: -O, --output arg
Overrides configuration parameter :confval:`output.address`.
This is the CAPS server which shall receive the data.
.. option:: --agent arg
Sets the agent string. Allows the server to identify the
application that sends data.
.. option:: -b, --buffer-size arg
Size \(bytes\) of the journal buffer. If the value is
exceeded, a synchronization of the journal is forced.
.. option:: --backfilling arg
Default: ``0``
Buffer size in seconds for backfilling gaps.
.. option:: --mseed
Enable on\-the\-fly miniSEED encoding. If the encoder does not
support the input type of a packet, it will be forwarded.
Re\-encoding of miniSEED packets is not supported.
.. option:: --encoding arg
miniSEED encoding to use: Uncompressed, Steim1 or Steim2.
.. option:: --rec-len arg
miniSEED record length expressed as a power of
2. A 512 byte record would be 9.
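The record size follows as :math:`2^n` bytes from the given exponent; a quick
sketch of the usual range:

```shell
# --rec-len 9 corresponds to 2^9 = 512 byte records.
awk 'BEGIN { for (n = 8; n <= 12; n++) printf "rec-len %d -> %d bytes\n", n, 2^n }'
```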
.. option:: --max-future-endtime arg
Maximum allowed relative end time for packets. If the packet
end time is greater than the current time plus this value,
the packet will be discarded. By default this value is set
to 120 seconds.
.. option:: --dump-packets
Dump packets to stdout.
.. option:: --test
Disable socket communication.
.. option:: --dump
Dump all received data to stdout and don't use the input
port.
.. _Journal:
Journal
-------
.. option:: -j, --journal arg
File to store stream states. Use an empty string to log to
stdout.
.. option:: -f, --flush arg
Flush stream states to disk every n seconds.
.. option:: --wait-for-ack arg
Wait when a sync has been forced, up to n seconds.
.. option:: -w, --wait-for-last-ack arg
Wait on shutdown to receive acknowledgement messages, up to
the given number of seconds.
.. _Status:
Status
------
.. option:: --status-log
Log status information, e.g., max bytes buffered.
.. option:: --status-flush arg
Flush status every n seconds to disk.
.. _Streams:
Streams
-------
.. option:: -A, --add-stream arg
StreamID [net.sta.loc.cha] to add.
.. option:: --id-file arg
File to read stream IDs from.
.. option:: --passthrough
Do not subscribe to any stream and accept everything a
record source is passing. This is useful in combination with
files.
.. option:: --begin arg
Start time of data time window.
.. option:: --end arg
End time of data time window.
.. option:: --days arg
Unit: *day*
Begin of data request time window given as days before current time.
Applied only on streams not found in the journal.
.. option:: --days-before arg
Unit: *day*
End of data request time window given as number of days
before current time.
.. _Polling:
Polling
-------
.. option:: --poll
For non\-streaming inputs polling can be
used to simulate real\-time streaming.
.. option:: --poll-window arg
Time window in seconds to be used with polling.
.. option:: --poll-interval arg
Time interval in seconds used for polling.
.. option:: --poll-serial
Will request each channel separately rather than all channels in
one request.
.. highlight:: rst
.. _rtpd2caps:
#########
rtpd2caps
#########
**CAPS import module for MRF packets from RTPD server.**
Description
===========
The RTPD plugin for CAPS collects DT and MRF packets through the REN protocol. It
is designed for very low latency, suitable for real-time data transmission.
The RTPD plugin needs a configuration file which is usually created by its init
script. This configuration file lives under
:file:`$SEISCOMP_ROOT/var/lib/rtpd2caps.cfg`.
The init script reads the configuration from :file:`$SEISCOMP_ROOT/etc/rtpd2caps.cfg`
and the bindings from :file:`$SEISCOMP_ROOT/etc/key/rtpd2caps/*` and prepares the
above final configuration file.
The configuration used by rtpd2caps looks like this:
.. code-block:: sh
# Number of records to queue if the sink connection is not available
queue_size = 20000
# Define the channel mapping. Each item is a tuple of source id composed
# of stream and channel and target location and stream code. The target code
# can be a single channel code (e.g. HNZ) or a combination of location and
# channel code (e.g. 00.HNZ). In case of DT packets the sampling interval
# must be specified after the channel code separated by '@'
# channels = 1.0:HNZ, 1.1:HN1, 1.2:HN2 MRF
# Starts a particular unit configuration. channel mapping can be overridden
# in a unit section as well.
unit 200B3
# Defines the output network code for this unit.
network = "RT"
# Defines the output station code for this unit.
station = "TEST1"
# The RTPD server address.
address = 1.2.3.4:2543
# The CAPS server address.
sink = localhost:18003
# Another unit.
unit 200B4
network = "RT"
station = "TEST2"
address = 1.2.3.4:2543
sink = localhost
A user does not need to create this configuration file manually if using the
plugin integrated into |scname|. The rtpd2caps plugin can be configured like any
other |scname| module, e.g. via :program:`scconfig`.
An example |appname| configuration to generate the configuration above can look like this:
:file:`$SEISCOMP3_ROOT/etc/rtpd2caps.cfg`
.. code-block:: sh
# RTP server address in format [host]:[port]. If port is omitted, 2543 is
# assumed. This is optional and only used if the address in a binding is
# omitted.
address = 1.2.3.4
# CAPS server address to send data to in format [host]:[port]. If port is
# omitted, 18003 is assumed. This is optional and only used if the sink in a
# binding is omitted.
sink = localhost:18003
# Channel mapping list where each item maps a REFTEK stream/channel id to a
# SEED channel code with optional location code. Format:
# {stream}.{channel}:[{loc}.]{cha}, e.g. 1.0:00.HHZ. This is the default used
# if a station binding does not define it explicitly.
channels = 1.0:HNZ,1.1:HN1,1.2:HN2
# Number of packets that can be queued when a sink is not reachable.
queueSize = 20000
:file:`$SEISCOMP3_ROOT/etc/key/rtpd2caps/station_RT_TEST1`
.. code-block:: sh
# Mandatory REFTEK unit id (hex).
unit = 200B3
:file:`$SEISCOMP3_ROOT/etc/key/rtpd2caps/station_RT_TEST2`
.. code-block:: sh
# Mandatory REFTEK unit id (hex).
unit = 200B4
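The ``channels`` mapping can be read as REFTEK stream.channel pairs mapped to
SEED codes. A sketch splitting the example value from above (the simple form
without location code):

```shell
# Split the example mapping into REFTEK stream/channel and SEED channel code.
channels='1.0:HNZ,1.1:HN1,1.2:HN2'
echo "$channels" | tr ',' '\n' | while IFS=: read -r src cha; do
  echo "stream=${src%%.*} channel=${src##*.} -> $cha"
done
# prints:
# stream=1 channel=0 -> HNZ
# stream=1 channel=1 -> HN1
# stream=1 channel=2 -> HN2
```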
Test examples
=============
To test a server and check which packets are available, rtpd2caps can be run
in test and verify mode.
.. code-block:: sh
$ rtpd2caps -H 1.2.3.4 --verify --test
Requested attributes:
DAS 'mask' (at_dasid) = 00000000
Packet mask (at_pmask) = 0x00004000
Stream mask (at_smask) = 0x0000FFFF
Socket I/O timeout (at_timeo) = 30
TCP/IP transmit buffer (at_sndbuf) = 0
TCP/IP receive buffer (at_rcvbuf) = 0
blocking I/O flag (at_block) = TRUE
2013:198-08:32:40 local [2195] Parameters:
2013:198-08:32:40 local [2195] * queue_size = 10000 records
2013:198-08:32:40 local [2195] * backfilling_buffer_size = 0s
2013:198-08:32:40 local [2195] Configured 1 source(s) and 0 sink(s)
[RTP 69.15.146.174:2543]
XX.YYYY unit 0
2013:198-08:32:40 local [2195] started reading from RTP server at 1.2.3.4:2543
2013:198-08:32:42 local [2195] Commands may not be sent
2013:198-08:32:42 local [2195] connected to 1.2.3.4:2543
Actual parameters:
DAS 'mask' (at_dasid) = 00000000
Packet mask (at_pmask) = 0x00004000
Stream mask (at_smask) = 0x0000FFFF
Socket I/O timeout (at_timeo) = 30
TCP/IP transmit buffer (at_sndbuf) = 0
TCP/IP receive buffer (at_rcvbuf) = 0
blocking I/O flag (at_block) = TRUE
200B3 stream 1
chamap: 7
chacnt: 3
cha : 99
dtype : 50
time : 2013.198 08:33:39.714000
nsamp : 20
bytes : 512
rate : 100
chans : 0, 1, 2
200B3 stream 1
chamap: 7
chacnt: 3
cha : 99
dtype : 50
time : 2013.198 08:33:39.914000
nsamp : 20
bytes : 512
rate : 100
chans : 0, 1, 2
200B3 stream 1
chamap: 7
chacnt: 3
cha : 99
dtype : 50
time : 2013.198 08:33:40.114000
nsamp : 20
bytes : 512
rate : 100
chans : 0, 1, 2
200B3 stream 1
chamap: 7
chacnt: 3
cha : 99
dtype : 50
time : 2013.198 08:33:40.314000
nsamp : 20
bytes : 512
rate : 100
chans : 0, 1, 2
...
Module Configuration
====================
.. note::
* rtpd2caps is a standalone module and does not inherit
:ref:`global options <global-configuration>`.
* Modules/plugins may require a license file. The default path to license
files is :file:`@DATADIR@/licenses/` which can be overridden by module
configuration of the parameter :confval:`gempa.licensePath`. Example: ::
gempa.licensePath = @CONFIGDIR@/licenses
| :file:`etc/defaults/rtpd2caps.cfg`
| :file:`etc/rtpd2caps.cfg`
| :file:`~/.seiscomp/rtpd2caps.cfg`
.. confval:: address
Type: *string*
RTP server address in format [host]:[port]. If port
is omitted, 2543 is assumed. This is optional and only used
if the address in a binding is omitted.
.. confval:: sink
Type: *string*
CAPS server address to send data to in format [host]:[port].
If port is omitted, 18003 is assumed. This is optional and only used
if the sink in a binding is omitted.
.. confval:: channels
Type: *list:string*
Channel mapping list where each item maps a REFTEK
stream\/channel id to a SEED channel code with optional
location code. Format: {stream}.{channel}:[{loc}.]{cha}, e.g.
1.0:00.HHZ. This is the default used if a station binding does
not define it explicitly.
.. confval:: queueSize
Default: ``10000``
Type: *int*
Number of packets that can be queued when a sink is not reachable.
.. confval:: backFillingBufferSize
Default: ``0``
Unit: *s*
Type: *int*
Length of backfilling buffer. Whenever a hole is detected, records
will be held in a buffer and not sent out. Records are flushed from
front to back if the buffer size is exceeded.
Bindings Parameters
===================
.. confval:: unit
Type: *string*
Mandatory REFTEK unit id \(hex\).
.. confval:: address
Type: *string*
RTP server address in format [host]:[port]. If port
is omitted, 2543 is assumed.
.. confval:: sink
Type: *string*
CAPS server address to send data to in format [host]:[port].
If port is omitted, 18003 is assumed.
.. confval:: channels
Type: *list:string*
Channel mapping list where each item maps a REFTEK
stream\/channel id to a SEED channel code with optional
location code. Format: {stream}.{channel}:[{loc}.]{cha}, e.g.
1.0:00.HHZ.
Command-Line Options
====================
.. option:: -h, --help
Print program usage and exit.
.. option:: -H, --host address
RTP server to connect to in format [host]:[port]. If port
is omitted, 2543 is assumed.
.. option:: -S, --sink address
CAPS server to send data to in format [host]:[port]. If port
is omitted, 18003 is assumed.
.. option:: -n, --queue-size arg
Default: ``10000``
Maximum number of packages queued before
the sink connection becomes blocking.
.. option:: -b, --backfilling-buffer-size arg
Default: ``0``
Buffer size in seconds for backfilling holes.
.. option:: -s, --syslog
Logs to syslog.
.. option:: -f, --config-file path
Path to configuration file to be used.
.. option:: --log-file path
Path to log file.
.. option:: --verbosity level
Log verbosity, 4\=DEBUG, 3\=INFO, 2\=WARN, 1\=ERR, 0\=QUIET.
.. option:: --debug
Set log level to DEBUG and log everything to stderr.
.. option:: --verify
Dump packet contents. This option is only useful for testing and debugging.
.. option:: --test
Do not send any data to CAPS.
.. highlight:: rst
.. _slink2caps:
##########
slink2caps
##########
**Data retrieval to CAPS using SeedLink plugins.**
Description
===========
*slink2caps* uses the available :term:`SeedLink` plugins to feed data from other
sources into a :ref:`CAPS` server. Data can be retrieved from any source for
which a :term:`SeedLink` plugin exists. The data will be converted into
:ref:`sec-pt-miniseed` format or other formats depending on the plugin itself.
For retrieving data from a :ref:`caps` server you may use :ref:`capstool` or
:ref:`rifftool`.
Transient Packets
=================
The plugin data acquisition and the outgoing CAPS connection are not
synchronized, so packets might be received by plugins but cannot be forwarded
to CAPS while the plugin is not allowed to send data to the server or the
server is not reachable. In this case packets are in a transient state and
would be lost on shutdown. To prevent packet loss the plugin stores all
transient packets to disk during shutdown by default. Configure
:confval:`buffer` for using an alternative location.
.. note::
Keep in mind to remove the buffer file before starting the plugin in case you
wish to reset the data acquisition completely.
Module Setup
============
Very few configuration steps are required as a minimum to use slink2caps:
#. Identify the :term:`SeedLink` instance to consider. Let's assume in this
description this instance is :program:`seedlink`. In general you may use any
instance (alias) of :term:`SeedLink`.
#. Configure the :term:`SeedLink` instance to consider:
* Disable the time tables in module configuration (:file:`@SYSTEMCONFIGDIR@/seedlink.cfg`):
.. code-block:: properties
plugins.chain.loadTimeTable = false
* Create bindings as usual for :term:`SeedLink`.
#. If any other instance than :program:`seedlink` is considered, then you need
to configure the parameters in the :ref:`seedlink` section of the module
configuration of slink2caps.
#. Apply the configuration changes and stop the considered :term:`SeedLink`
instance. Then, enable and start slink2caps:
.. code-block:: sh
seiscomp stop seedlink
seiscomp disable seedlink
seiscomp update-config seedlink
seiscomp enable slink2caps
seiscomp start slink2caps
.. warning::
The seedlink instance which is considered must not be running while using
slink2caps. E.g. when using :program:`seedlink` stop and disable this
instance first.
.. note::
As for many other |scname| and gempa modules, you may create aliases from
slink2caps for running multiple instances with different configurations at
the same time. In this case you must adjust the :confval:`buffer` and the
:ref:`journal.*` parameters.
You may create or remove a module alias using the :program:`seiscomp` tool,
e.g.:
.. code-block:: sh
seiscomp alias create slink2caps-alias slink2caps
seiscomp --interactive alias remove slink2caps-alias
Module Configuration
====================
| :file:`etc/defaults/global.cfg`
| :file:`etc/defaults/slink2caps.cfg`
| :file:`etc/global.cfg`
| :file:`etc/slink2caps.cfg`
| :file:`~/.seiscomp/global.cfg`
| :file:`~/.seiscomp/slink2caps.cfg`
slink2caps inherits :ref:`global options<global-configuration>`.
.. note::
Modules/plugins may require a license file. The default path to license
files is :file:`@DATADIR@/licenses/` which can be overridden by global
configuration of the parameter :confval:`gempa.licensePath`. Example: ::
gempa.licensePath = @CONFIGDIR@/licenses
.. confval:: buffer
Default: ``@ROOTDIR@/var/lib/slink2caps/buffer.mseed``
Type: *path*
Path to the buffer file where transient packets are stored on disk during
shutdown. Transient means packets that have been received from input plugins,
e.g., chain_plugin, but have not been acknowledged by CAPS. Without local
storage on disk those packets would be lost. During start the plugin reads the
buffer file and tries to send the packets again. Keep in mind to remove the
buffer file before plugin start in case the data acquisition should be reset.
.. _seedlink:
.. note::
**seedlink.\***
*Data input control*
.. confval:: seedlink.config
Default: ``@ROOTDIR@/var/lib/seedlink/seedlink.ini``
Type: *path*
Path to Seedlink configuration file. Use the respective name
if seedlink runs as an alias.
.. confval:: seedlink.name
Default: ``seedlink``
Type: *string*
Name of Seedlink configuration section. Use the respective name
if seedlink runs as an alias.
.. _output:
.. note::
**output.\***
*Data output control*
.. confval:: output.stdout
Default: ``false``
Type: *boolean*
Write miniSEED records to stdout instead of pushing them
to CAPS.
.. confval:: output.address
Default: ``localhost:18003``
Type: *string*
Data output URL [[caps\|capss]:\/\/][user:pass\@]host[:port]. This parameter
supersedes the host and port parameters of previous versions and takes precedence.
.. confval:: output.host
Default: ``localhost``
Type: *string*
Data output host. Deprecated: Use output.address instead.
.. confval:: output.port
Default: ``18003``
Type: *int*
Data output port. Deprecated: Use output.address instead.
.. confval:: output.timeout
Default: ``60``
Unit: *s*
Type: *int*
Timeout when sending a packet. If the timeout expires
the connection will be closed and re\-established.
.. confval:: output.maxFutureEndTime
Default: ``120``
Unit: *s*
Type: *int*
Maximum allowed relative end time for packets. If the packet
end time is greater than the current time plus this value,
the packet will be discarded. By default this value is set
to 120 seconds.
.. confval:: output.bufferSize
Default: ``131072``
Unit: *bytes*
Type: *uint*
Size \(bytes\) of the packet buffer
.. confval:: output.backFillingBufferSize
Default: ``0``
Unit: *s*
Type: *int*
Length of backfilling buffer. Whenever a gap is detected, records
will be held in a buffer and not sent out. Records are flushed from
front to back if the buffer size is exceeded.
.. _journal:
.. confval:: journal.file
Default: ``@ROOTDIR@/var/run/rs2caps/journal``
Type: *string*
File to store stream states
.. confval:: journal.flush
Default: ``10``
Unit: *s*
Type: *uint*
Flush stream states to disk every n seconds
.. confval:: journal.waitForAck
Default: ``60``
Unit: *s*
Type: *uint*
Wait when a sync has been forced, up to n seconds
.. confval:: journal.waitForLastAck
Default: ``5``
Unit: *s*
Type: *uint*
Wait on shutdown to receive acknowledgement messages, up to n seconds
.. _statusLog:
.. confval:: statusLog.enable
Default: ``false``
Type: *boolean*
Log status information, e.g.
max bytes buffered
.. confval:: statusLog.flush
Default: ``10``
Type: *uint*
Flush status every n seconds to disk
Command-Line Options
====================
.. _Generic:
Generic
-------
.. option:: -h, --help
Show help message.
.. option:: -V, --version
Show version information.
.. option:: --config-file arg
Use alternative configuration file. When this option is
used the loading of all stages is disabled. Only the
given configuration file is parsed and used. To use
another name for the configuration create a symbolic
link of the application or copy it. Example:
scautopick \-> scautopick2.
.. option:: --plugins arg
Load given plugins.
.. option:: -D, --daemon
Run as daemon. This means the application will fork itself
and doesn't need to be started with \&.
.. option:: -f, --seedlink-config arg
Default: ``@ROOTDIR@/var/lib/seedlink/seedlink.ini``
Path to Seedlink configuration file. Default: \@ROOTDIR\@\/var\/lib\/seedlink\/seedlink.ini
.. option:: -n, --section-name arg
Default: ``seedlink``
Name of Seedlink configuration section. Default: seedlink.
.. _Verbosity:
Verbosity
---------
.. option:: --verbosity arg
Verbosity level [0..4]. 0:quiet, 1:error, 2:warning, 3:info,
4:debug.
.. option:: -v, --v
Increase verbosity level \(may be repeated, e.g. \-vv\).
.. option:: -q, --quiet
Quiet mode: no logging output.
.. option:: --print-component arg
For each log entry print the component right after the
log level. By default the component output is enabled
for file output but disabled for console output.
.. option:: --component arg
Limit the logging to a certain component. This option can
be given more than once.
.. option:: -s, --syslog
Use syslog logging backend. The output usually goes to
\/var\/log\/messages.
.. option:: -l, --lockfile arg
Path to lock file.
.. option:: --console arg
Send log output to stdout.
.. option:: --debug
Execute in debug mode.
Equivalent to \-\-verbosity\=4 \-\-console\=1 .
.. option:: --trace
Execute in trace mode.
Equivalent to \-\-verbosity\=4 \-\-console\=1 \-\-print\-component\=1
\-\-print\-context\=1 .
.. option:: --log-file arg
Use alternative log file.
.. _Output:
Output
------
.. option:: -H, --host arg
Default: ``localhost``
Data output host. Default: localhost.
.. option:: -p, --port arg
Default: ``18003``
Data output port. Default: 18003.
.. option:: -c, --stdout
Write records to stdout.
.. option:: --max-future-endtime arg
Maximum allowed relative end time for packets. If the packet
end time is greater than the current time plus this value,
the packet will be discarded. By default this value is set
to 120 seconds.
.. highlight:: rst
.. _sproc2caps:
##########
sproc2caps
##########
**Recordstream data acquisition plugin that applies filters and/or
mathematical expressions to one or more data streams, forming new streams**
Description
===========
The sproc2caps plugin requests data from a |scname| :cite:t:`recordstream` in
real time or based on :ref:`time windows <sproc-tw>`,
:ref:`filters the data<sproc-filter>` and/or applies
:ref:`mathematical expressions <sproc-expressions>`. The processed data is sent
to a CAPS server or to stdout. Streams can be :ref:`renamed <sproc-rename>`.
Setup
=====
Streams
-------
The plugin reads the streams to subscribe to from a separate stream map file.
The location of the file can be either defined in the plugin configuration or
given as a command line argument:
.. code-block:: bash
streams.map = @DATADIR@/sproc2caps/streams.map
Each line of the stream map file defines n input streams and one output stream.
By definition at least one input and one output stream must be given. The last
entry in a line is the output stream; all other entries are input streams.
Lines beginning with a comment character are ignored.
.. note::
The map file is required even if the stream codes remain the same. Without an
entry in the map file the input streams are not processed.
Example map file:
.. code-block:: bash
#Input 1 Input 2 ... Output
XX.TEST1..HHZ XX.TEST2..HHZ ... XX.TEST3..HHZ
Each stream entry may contain additional stream options, e.g. for
:ref:`data filtering <sproc-filter>`. Options are indicated by "?".
The following stream options are supported:
====== ============================================= ==================
Name   Description                                   Example
====== ============================================= ==================
filter Filter string                                 filter=BW(4,0.7,2)
unit   Output unit                                   unit=cm/s
expr   Expression to be used (output only)           expr=x1+x2
====== ============================================= ==================
Examples of streams with stream options:
.. code-block:: bash
XX.TEST1..HHZ?filter=BW_HP(4,0.1)
XX.TEST2..HHZ?filter=BW_HP(4,0.1)
XX.TEST3..HHZ?filter=BW(4,0.7,2)?unit=cm/s
For the given example the plugin assigns the following access variables to
the streams. The access variables can be used in the mathematical expression string.
The *unit* option provides an additional description of the stream. *unit*
does not modify the stream.
Access variables for N input streams:
========= ============= ============= === =============
map,input input 1 input 2 ... input N
========= ============= ============= === =============
stream XX.TEST1..HHZ XX.TEST2..HHZ ... XX.TESTN..HHZ
variable x1 x2 ... xN
========= ============= ============= === =============
When the mathematical expression is evaluated, each xi is replaced with the
sample of the corresponding stream at the sample time. The maximum number of
input streams is 3.
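As an illustration (the stream codes and the averaging expression are examples only), a map-file line that forms the mean of two input streams could look like:
.. code-block:: bash
XX.TEST1..HHZ XX.TEST2..HHZ XX.TESTM..HHZ?expr=(x1+x2)/2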
.. _sproc-filter:
Filtering
---------
Input data can be filtered before :ref:`mathematical expressions <sproc-expressions>`
are applied. Filter grammar and all filters :cite:p:`filter-grammar` known from
|scname| can be considered. By default input data remain unfiltered.
Example for setting the filter in the map file:
.. code-block:: bash
XX.TEST1..HHZ?filter=BW(4,0.7,2) XX.TEST2..HHZ XX.TEST3..HHZ
.. _sproc-expressions:
Expressions
-----------
The sproc plugin uses the C++ Mathematical Expression Library to evaluate
mathematical expressions. The library supports a wide range of mathematical
expressions. The complete feature list can be found here_. The number of
input variables depends on the number of input streams. The variables are numbered
consecutively from 1 to n: x1, x2, ..., xn.
Example how to multiply 3 streams:
- via command-line:
.. code-block:: bash
--expr="x1*x2*x3"
- via config:
.. code-block:: bash
streams.expr = x1*x2*x3
- via stream options:
.. code-block:: bash
XX.TEST1..HHZ XX.TEST2..HHZ XX.TEST3..HHZ XX.TESTOUT..HHZ?expr=x1*x2*x3
.. _here: http://www.partow.net/programming/exprtk/
.. _sproc-rename:
Rename Streams
--------------
In addition to applying mathematical expressions to streams, the plugin can
also be used to rename streams. The following example shows how to map the
streams **GE.APE..BHE** and **GE.BKNI..BHE** to new stream IDs and store the
output streams in the same CAPS server:
1. Open the plugin configuration and create a clone of the input data stream with:
.. code-block:: bash
streams.expr = x1
#. Create the mapping file **@DATADIR@/sproc2caps/streams.map** with the following content
.. code-block:: bash
# Input Output
GE.APE..BHE AB.APE..BHE
GE.BKNI..BHE GE.BKNI2..BHE
.. _sproc-tw:
Time windows
------------
Set the time window using :option:`--begin` and :option:`--end` to define the
start and end times, respectively. When no time window is given, real-time
input data are considered.
Examples
========
#. To map waveform data for a specific time window reading from a local CAPS server on
localhost:18002 and sending to the plugin port of the same CAPS server on localhost:18003 run:
.. code-block:: bash
sproc2caps --begin "2019-01-01 00:00:00" --end "2019-01-01 01:00:00" -I "caps://localhost:18002" -a localhost:18003
This will create duplicate data on the CAPS server if the map file renames the streams.
To remove the original streams:
1. Configure caps to keep the original data for 0 days
#. Restart or reload caps
#. Read real-time data from an external SeedLink server, as with
:cite:p:`slink2caps`, but applying the mapping:
.. code-block:: bash
sproc2caps -I "slink://host:18000" -a localhost:18003
#. Read data from the file *data.mseed*, resampled to a 10 Hz sample rate by
the RecordStream, and write the resulting data to stdout. By applying
:option:`--stop` the processing stops when the data is read completely:
.. code-block:: bash
sproc2caps -I dec://file?rate=10/data.mseed -d localhost --gain-in 1 --gain-out 1 --dump-packets --mseed --begin "2000-01-01 00:00:00" --stop > test.mseed
You may join the command with :cite:t:`capstool` and :cite:t:`scmssort`:
.. code-block:: bash
echo "2024,01,01,00,00,00 2024,01,01,00,10,00 * * * *" | capstool -H localhost |\
sproc2caps -I dec://file?rate=10/- -d localhost --gain-in 1 --gain-out 1 --dump-packets --mseed --begin "2000-01-01 00:00:00" --stop |\
scmssort -E > test.mseed
.. note::
A similar action may be executed using :ref:`rs2caps`.
Module Configuration
====================
| :file:`etc/defaults/global.cfg`
| :file:`etc/defaults/sproc2caps.cfg`
| :file:`etc/global.cfg`
| :file:`etc/sproc2caps.cfg`
| :file:`~/.seiscomp/global.cfg`
| :file:`~/.seiscomp/sproc2caps.cfg`
sproc2caps inherits :ref:`global options<global-configuration>`.
.. note::
Modules/plugins may require a license file. The default path to license
files is :file:`@DATADIR@/licenses/` which can be overridden by global
configuration of the parameter :confval:`gempa.licensePath`. Example: ::
gempa.licensePath = @CONFIGDIR@/licenses
.. _journal:
.. confval:: journal.file
Default: ``@ROOTDIR@/var/run/sproc2caps/journal``
Type: *string*
File to store stream states
.. confval:: journal.flush
Default: ``10``
Unit: *s*
Type: *uint*
Flush stream states to disk every n seconds
.. confval:: journal.waitForAck
Default: ``60``
Unit: *s*
Type: *uint*
Wait when a sync has been forced, up to n seconds
.. confval:: journal.waitForLastAck
Default: ``5``
Unit: *s*
Type: *uint*
Wait on shutdown to receive acknowledgement messages, up to n seconds
.. _streams:
.. note::
**streams.\***
*Configure operations applied to input streams and the stream mapping.*
.. confval:: streams.begin
Type: *string*
Start time of data time window, default 'GMT'
.. confval:: streams.end
Type: *string*
End time of data time window
.. confval:: streams.filter
Default: ``self``
Type: *string*
Sets the input filter
.. confval:: streams.expr
Default: ``x1 + x2``
Type: *string*
Sets the mathematical expression
.. confval:: streams.map
Default: ``@DATADIR@/sproc2caps/streams.map``
Type: *string*
Absolute path to the stream map file. Each line
holds n input streams and one output stream.
Example:
CX.PB11..BHZ CX.PB11..BHZ
CX.PB11..BHZ CX.PB07..BHZ CX.PB11..BBZ
.. _output:
.. note::
**output.\***
*Configure the data output.*
.. confval:: output.address
Default: ``localhost:18003``
Type: *string*
Data output URL [[caps\|capss]:\/\/][user:pass\@]host[:port]. This parameter
supersedes the host and port parameters of previous versions and takes precedence.
.. confval:: output.host
Default: ``localhost``
Type: *string*
Data output host. Deprecated: Use output.address instead.
.. confval:: output.port
Default: ``18003``
Type: *int*
Data output port. Deprecated: Use output.address instead.
.. confval:: output.bufferSize
Default: ``1048576``
Unit: *B*
Type: *uint*
Size \(bytes\) of the packet buffer
.. confval:: output.backfillingBufferSize
Default: ``180``
Unit: *s*
Type: *uint*
Length of backfilling buffer. Whenever a gap is detected, records
will be held in a buffer and not sent out. Records are flushed from
front to back if the buffer size is exceeded.
.. _output.mseed:
.. confval:: output.mseed.enable
Default: ``true``
Type: *boolean*
Enable on\-the\-fly miniSEED
encoding. If the encoder does not support the input
type of a packet it will be forwarded. Re\-encoding of
miniSEED packets is not supported.
.. confval:: output.mseed.encoding
Default: ``Steim2``
Type: *string*
MiniSEED encoding to use. \(Uncompressed, Steim1 or Steim2\)
.. _statusLog:
.. confval:: statusLog.enable
Default: ``false``
Type: *boolean*
Log status information, e.g.
max bytes buffered
.. confval:: statusLog.flush
Default: ``10``
Unit: *s*
Type: *uint*
Flush status every n seconds to disk
Command-Line Options
====================
.. _Generic:
Generic
-------
.. option:: -h, --help
Show help message.
.. option:: -V, --version
Show version information.
.. option:: -D, --daemon
Run as daemon. This means the application will fork itself
and doesn't need to be started with \&.
.. _Verbosity:
Verbosity
---------
.. option:: --verbosity arg
Verbosity level [0..4]. 0:quiet, 1:error, 2:warning, 3:info,
4:debug.
.. option:: -v, --v
Increase verbosity level \(may be repeated, e.g. \-vv\).
.. option:: -q, --quiet
Quiet mode: no logging output.
.. option:: -s, --syslog
Use syslog logging backend. The output usually goes to
\/var\/log\/messages.
.. option:: -l, --lockfile arg
Path to lock file.
.. option:: --console arg
Send log output to stdout.
.. option:: --debug
Execute in debug mode.
Equivalent to \-\-verbosity\=4 \-\-console\=1 .
.. option:: --log-file arg
Use alternative log file.
.. _Records:
Records
-------
.. option:: --record-driver-list
List all supported record stream drivers.
.. option:: -I, --record-url arg
The recordstream source URL, format:
[service:\/\/]location[#type].
\"service\" is the name of the recordstream driver
which can be queried with \"\-\-record\-driver\-list\".
If \"service\" is not given, \"file:\/\/\" is
used.
.. option:: --record-file arg
Specify a file as record source.
.. option:: --record-type arg
Specify a type for the records being read.
.. _Output:
Output
------
.. option:: -O, --output arg
Overrides configuration parameter :confval:`output.address`.
This is the CAPS server which shall receive the data.
.. option:: --agent arg
Sets the agent string. Allows the server to identify the
application that sends data.
.. option:: -b, --buffer-size arg
Size \(bytes\) of the journal buffer. If the value is
exceeded, a synchronization of the journal is forced.
.. option:: --backfilling arg
Default: ``0``
Buffer size in seconds for backfilling gaps.
.. option:: --mseed
Enable on\-the\-fly miniSEED encoding. If the encoder does not
support the input type of a packet, it will be forwarded.
Re\-encoding of miniSEED packets is not supported.
.. option:: --encoding arg
miniSEED encoding to use: Uncompressed, Steim1 or Steim2.
.. option:: --rec-len arg
miniSEED record length expressed as a power of
2. A 512 byte record would be 9.
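The relation between this power-of-two value and the record length in bytes can be checked with shell arithmetic, e.g.:
.. code-block:: sh
echo $((1 << 9))   # 512-byte records for --rec-len 9
echo $((1 << 12))  # 4096-byte records for --rec-len 12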
.. option:: --max-future-endtime arg
Maximum allowed relative end time for packets. If the packet
end time is greater than the current time plus this value,
the packet will be discarded. By default this value is set
to 120 seconds.
.. option:: --dump-packets
Dump packets to stdout.
.. _Journal:
Journal
-------
.. option:: -j, --journal arg
File to store stream states. Use an empty string to log to
stdout.
.. option:: --flush arg
Flush stream states to disk every n seconds.
.. option:: --wait-for-ack arg arg
Wait when a sync has been forced, up to n seconds.
.. option:: -w, --wait-for-last-ack arg
Wait on shutdown to receive acknowledgement messages, up to
the given number of seconds.
.. _Status:
Status
------
.. option:: --status-log
Log information status information, e.g., max bytes buffered.
.. option:: --status-flush arg
Flush status every n seconds to disk.
.. option:: --stop
Stop processing when data acquisition is finished. The
'finished' signal depends on data source.
.. _Streams:
Streams
-------
.. option:: --begin arg
Start time of data time window.
.. option:: --end arg
End time of data time window.
.. option:: --map arg
Stream map file.
.. option:: --expr arg
Mathematical expression to be applied.
.. _Test:
Test
----
.. option:: --gain-in arg
Gain that is applied to the input values.
.. option:: --gain-out arg
Gain that is applied to the output values.
.. highlight:: rst
.. _test2caps:
#########
test2caps
#########
**Recordstream data acquisition plugin**
Description
===========
:program:`test2caps` generates test signals which are sent to a CAPS
server. This plugin is useful for testing and developing data acquisition or
processing modules.
Module Configuration
====================
| :file:`etc/defaults/global.cfg`
| :file:`etc/defaults/test2caps.cfg`
| :file:`etc/global.cfg`
| :file:`etc/test2caps.cfg`
| :file:`~/.seiscomp/global.cfg`
| :file:`~/.seiscomp/test2caps.cfg`
test2caps inherits :ref:`global options<global-configuration>`.
.. note::
Modules/plugins may require a license file. The default path to license
files is :file:`@DATADIR@/licenses/` which can be overridden by global
configuration of the parameter :confval:`gempa.licensePath`. Example: ::
gempa.licensePath = @CONFIGDIR@/licenses
Command-Line Options
====================
.. _General:
General
-------
.. option:: -h, --help
Print help message
.. option:: --config file
File to read configuration from
.. _Stream:
Stream
------
.. option:: --id arg
Comma separated list of stream IDs [net.sta.loc.cha] to use
.. option:: --id-file arg
File to read stream IDs from
.. option:: --begin arg
Start date and time of data stream
.. option:: --interval arg
Sampling interval to use, format is numerator\/denominator
.. _Mode:
Mode
----
.. option:: --read-from arg
File to read data from
.. option:: --random arg
Generate n random samples
.. option:: --stream
Generate continuous sine data
.. option:: --amplitude arg
Amplitude of the sine data
.. option:: --period arg
Period of the sine data in seconds
.. _Packets:
Packets
-------
.. option:: --data-type arg
Data type to use. Available are: INT8, INT32, FLOAT, DOUBLE
.. option:: --fill arg
Number of seconds of data to send before\/after start time
.. option:: --format arg
Format description 4 characters, e.g. 'JPEG'
.. option:: --mseed
Enable Steim2 encoding for RAW packets
.. option:: --recsize arg
Record size in samples in stream mode
.. option:: -q, --quality arg
Record timing quality
.. option:: --type arg
Packet type to use, e.g. ANY, RAW
.. _Verbosity:
Verbosity
---------
.. option:: --verbosity arg
Verbosity level [0..4]. 0:quiet, 1:error, 2:warning, 3:info,
4:debug.
.. _Output:
Output
------
.. option:: -H, --host arg
Data output host
.. option:: -p, --port arg
Data output port
.. _Journal:
Journal
-------
.. option:: -j, --journal arg
File to store stream states. Use an empty string to log to standard out.
.. option:: -f, --flush arg
Flush stream states to disk every n seconds
.. option:: --waitForAck arg
Wait when a sync has been forced, up to n seconds
.. option:: -w, --waitForLastAck arg
Wait on shutdown to receive acknownledgement messages, up to n seconds
.. highlight:: rst
.. _v4l2caps:
########
v4l2caps
########
**Video for Linux capture plugin**
Description
===========
Video for Linux is a video capture application programming interface (API) and
library for Linux. The library supports many USB web cams, TV tuners as well
as other devices and is the common way to access multimedia devices under
Linux. The v4l2caps plugin uses the Video for Linux API to capture frames from
compatible hardware devices and stores each frame in CAPS.
Available resolutions, pixel formats and other parameters depend on the device
used. See the manual of the hardware manufacturer for more details.
The plugin's capture process grabs a frame at a given sampling interval.
For each frame a new ANY packet is created which uses the sampling time
of the frame as start and end time of the packet. In addition, the format of
the packet is set to the selected pixel format. Frame drops may occur when the
storage system is not fast enough to handle the incoming data.
Examples
========
To capture 15 images per second (the maximum number of images depends on your hardware) and store the output into CAPS use:
.. code-block:: sh
$ v4l2caps -s SW.HMA.317.CAM --interval 15/1
Module Configuration
====================
| :file:`etc/defaults/global.cfg`
| :file:`etc/defaults/v4l2caps.cfg`
| :file:`etc/global.cfg`
| :file:`etc/v4l2caps.cfg`
| :file:`~/.seiscomp/global.cfg`
| :file:`~/.seiscomp/v4l2caps.cfg`
v4l2caps inherits :ref:`global options<global-configuration>`.
.. note::
Modules/plugins may require a license file. The default path to license
files is :file:`@DATADIR@/licenses/` which can be overridden by global
configuration of the parameter :confval:`gempa.licensePath`. Example: ::
gempa.licensePath = @CONFIGDIR@/licenses
.. confval:: host
Default: ``localhost``
Type: *string*
Data output host
.. confval:: port
Default: ``18003``
Type: *uint*
Data output port
.. confval:: streamID
Type: *string*
Stream ID to use, format is [net.sta.loc.cha]
.. confval:: resolution
Type: *string*
Resolution to use
.. confval:: outputFormat
Type: *string*
Output format to use [rgb, jpg]
.. confval:: outputQuality
Default: ``100``
Type: *int*
Output quality to use [0\-100]
.. confval:: interval
Type: *uint*
Sampling interval to use, format is [Denominator\/Numerator]
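For example, one frame per second could be configured as follows (assuming the denominator/numerator format stated above; the value is only an illustration):
.. code-block:: properties
interval = 1/1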
.. confval:: pixelFormat
Type: *string*
Pixel format to use, expected as four character code[ABCD]
.. confval:: count
Default: ``0``
Type: *uint*
Number of frames to grab
.. confval:: skip
Default: ``0``
Type: *uint*
Number of frames to skip
.. confval:: bufferSize
Default: ``1048576``
Type: *uint*
Size \(bytes\) of the internal buffer that keeps still unconfirmed packets
.. confval:: device
Default: ``/dev/video0``
Type: *string*
Video device name
.. confval:: io
Default: ``1``
Type: *uint*
I\/O method. 0: Use read function, 1: Use memory mapped buffers
Command-Line Options
====================
.. option:: -b, --buffer-size arg
Size \(bytes\) of the internal buffer that keeps still unconfirmed packets
.. option:: -c, --count arg
Number of frames to grab
.. option:: --config arg
Path to configuration file
.. option:: -d, --device arg
Video device name
.. option:: --dump arg
Dump output to file
.. option:: -f, --pixel-format arg
Pixel format to use, expected as four character code[ABCD]
.. option:: -F, --output-format arg
Output format to use [rgb, jpg]
.. option:: -H, --host arg
Data output host
.. option:: -h, --help
Print help
.. option:: --info arg
Print device info
.. option:: -i, --interval arg
Sampling interval to use, format is [Denominator\/Numerator]
.. option:: --io arg
I\/O method. 0: Use read function, 1: Use memory mapped buffers [default]
.. option:: -p, --port arg
Data output port
.. option:: -q, --output-quality arg
Output quality to use [0\-100]
.. option:: -r, --resolution arg
Resolution to use
.. option:: -s, --stream-id arg
Stream ID to use, format is [net.sta.loc.cha]
.. option:: -S, --skip arg
Number of frames to skip
.. highlight:: rst
.. _win2caps:
########
win2caps
########
**WIN CAPS plugin. Sends data read from a socket or file to CAPS.**
Module Configuration
====================
| :file:`etc/defaults/global.cfg`
| :file:`etc/defaults/win2caps.cfg`
| :file:`etc/global.cfg`
| :file:`etc/win2caps.cfg`
| :file:`~/.seiscomp/global.cfg`
| :file:`~/.seiscomp/win2caps.cfg`
win2caps inherits :ref:`global options<global-configuration>`.
.. note::
Modules/plugins may require a license file. The default path to license
files is :file:`@DATADIR@/licenses/` which can be overridden by global
configuration of the parameter :confval:`gempa.licensePath`. Example: ::
gempa.licensePath = @CONFIGDIR@/licenses
.. _input:
.. confval:: input.port
Default: ``18000``
Type: *uint*
Listen for incoming packets at given port
.. _output:
.. confval:: output.host
Default: ``localhost``
Type: *string*
Data output host
.. confval:: output.port
Default: ``18003``
Type: *int*
Data output port
.. confval:: output.bufferSize
Default: ``1048576``
Type: *uint*
Size \(bytes\) of the packet buffer
.. _streams:
.. confval:: streams.file
Type: *string*
File to read streams from. Each line defines a mapping between a station and stream id. Line format is [ID NET.STA].
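A minimal example of such a file, with hypothetical station IDs and codes, maps one ID to one NET.STA pair per line:
.. code-block:: bash
0101 XX.STA01
0102 XX.STA02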
Command-Line Options
====================
.. _Generic:
Generic
-------
.. option:: -h, --help
Show help message.
.. option:: -V, --version
Show version information.
.. option:: --config-file arg
Use alternative configuration file. When this option is
used the loading of all stages is disabled. Only the
given configuration file is parsed and used. To use
another name for the configuration create a symbolic
link of the application or copy it. Example:
scautopick \-> scautopick2.
.. _Verbosity:
Verbosity
---------
.. option:: --verbosity arg
Verbosity level [0..4]. 0:quiet, 1:error, 2:warning, 3:info,
4:debug.
.. option:: -v, --v
Increase verbosity level \(may be repeated, e.g. \-vv\).
.. option:: -q, --quiet
Quiet mode: no logging output.
.. option:: --print-component arg
For each log entry print the component right after the
log level. By default the component output is enabled
for file output but disabled for console output.
.. option:: --component arg
Limit the logging to a certain component. This option can
be given more than once.
.. option:: -s, --syslog
Use syslog logging backend. The output usually goes to
\/var\/log\/messages.
.. option:: -l, --lockfile arg
Path to lock file.
.. option:: --console arg
Send log output to stdout.
.. option:: --debug
Execute in debug mode.
Equivalent to \-\-verbosity\=4 \-\-console\=1 .
.. option:: --trace
Execute in trace mode.
Equivalent to \-\-verbosity\=4 \-\-console\=1 \-\-print\-component\=1
\-\-print\-context\=1 .
.. option:: --log-file arg
Use alternative log file.
.. _Input:
Input
-----
.. option:: --station arg
Sets the station and sampling interval to use. Format is [net.sta\@?]
.. option:: -f, --file arg
Load CREX data directly from file
.. option:: --read-from arg
Read packets from this file
.. option:: --port arg
Listen for incoming packets at given port
.. _Output:
Output
------
.. option:: -H, --host arg
Data output host
.. option:: -p, --port arg
Data output port
.. _Streams:
Streams
-------
.. option:: --streams-file arg
File to read streams from. Each line defines a mapping between a station and stream id. Line format is [ID NET.STA].
# Change Log
## 2025.101
### Fixed
- Improved performance significantly when requesting many channels from
an upstream caps server.
## 2025.069
### Changed
- Ported code to the latest SeisComP API 17 and fixed deprecation
warnings.
## 2024.262
### Fixed
- MiniSEED encoding allows half a sample timing tolerance
to detect contiguous records.
## 2023.257
### Added
- New configuration options `maxRealTimeGap` and `marginRealTimeGap`.
They allow configuring a dedicated backfilling stream and
preferring real-time data. The consequence is the reception of
out-of-order records at clients.
## 2023.234
### Added
- Command-line help and more module documentation.
## 2022-02-28
### Added
- New config option `timeWindowUpdateInterval`. This option
sets the interval in seconds at which the relative request
time window defined by option `days` and/or `daysBefore` is
updated. Use a value less than or equal to zero to disable the update.
This feature is supported in archive mode only.
A typical use case is when data has to be transmitted
continuously with a time delay.
```bash
timeWindowUpdateInterval=86400
```
## 2022-02-25
### Fixed
- Wrong time window subscription after reconnect
## 2020-12-22
### Added
- Configuration description for daysBefore in e.g. scconfig
## 2020-12-17
### Added
- New config option `daysBefore` which can be used to set the end time
of the data acquisition time window n days before the current time, e.g.,
``` bash
daysBefore=10
```
## 2020-02-17
### Changed
- Increase default timeout for acknowledgement messages from 5s to 60s
- Use microsecond precision in data requests
## 2020-02-12
### Added
- Backfilling buffer which is a tool to mitigate out-of-order data. Whenever a
gap is detected, records will be held in a buffer and not sent out. Records
are flushed from front to back if the buffer size is exceeded.
## 2020-02-10
### Changed
- Subscribe to streams even if the requested end time is before the last
received timestamp. This is necessary to avoid requesting data again in
case of wildcard requests.
## 2018-08-05
### Fixed
- segfault in journal file parser
- corrupt journal files
## 2018-03-19
### Added
- SSL support for outgoing connections
## 2018-03-14
### Fixed
- The journal file will be stored by default at @ROOTDIR@/var/run/[name]/journal
where name is the name of the application. In standard cases it is `caps2caps`
but not when aliases are in use.
## 2017-03-21
### Fixed
- stream recovery in case of wild card request
## 2017-02-14
### Added
- out-of-order support

# Change Log
All notable changes to the python plugins will be documented in this file.
## 2024.262
### Fixed
- MiniSEED encoding allows half a sample timing tolerance
to detect contiguous records.
## 2024.005
### Added
- data2caps
- Document the processing of SLIST files with multiple data blocks.
## 2023.317
### Changed
- image2caps
- Enforce Python3
## 2023.298
### Added
- data2caps
- Allow setting the network code explicitly by `--network`.
### Changed
- data2caps
- Read the sample rate numerator and denominator separately instead of
assuming denominator = 1.
- For format unavco 1.0 the network code must be given explicitly.
### Fixed
- data2caps
- Send data in unavco data format which was not done before.
## 2023.255
### Added
- data2caps
- Renamed from raw2caps.
- Support reading slist files, add documentation.
- Support reading strain & seismic data files from www.unavco.org.

# Change Log
All notable changes to rs plugin will be documented in this file.
## 2025.051
### Added
- Option `days` that allows setting the start time of the data
time window n days before the current time, e.g.,
``` bash
days = 1
```
- Option `daysBefore` that allows setting the end time of the data
time window n days before the current time, e.g.,
``` bash
daysBefore = 1
```
## 2024.262
### Fixed
- MiniSEED encoding allows half a sample timing tolerance
to detect contiguous records.
## 2024.173
### Added
- Config option `streams.passthrough`. Until now, the feature could only
be activated via a command line option.
## 2024.156
### Important
- The command-line option `--addr`/`-a` has been renamed to
`--output`/`-O` in order to be consistent with other applications like
caps2caps. Scripts/processes using this parameter must be adjusted.
## 2023.254
### Added
- Make `output.maxFutureEndTime` configurable in scconfig.
## 2023.135
### Fixed
- Inventory subscription
## 2022.332
### Added
- Add poll mode for non-real-time inputs, e.g. fdsnws.
## 2021-04-29
### Added
- Add SSL and authentication support for the output connection.
With this version the data output URL can be set with the
config option ``output.address``. The formal definition
of the field is: [[caps|capss]://][user:pass@]host[:port] e.g.
```
output.address = capss://caps:caps@localhost:18003
```
The new output.address parameter supersedes the output.host and
output.port parameters of previous versions and takes precedence.
The old parameters are kept for compatibility reasons but are
marked as deprecated.
## 2020-02-17
### Changed
- Increase default timeout for acknowledgement messages from 5s to 60s
## 2020-01-23
### Fixed
- Make init script Python 2 and 3 compatible
## 2019-08-07
### Added
- plugin version information
## 2019-01-30
### Fixed
- Loading inventory from file
## 2018-12-17
### Added
- Added new option ``--status-log``. With this option enabled
the plugin writes status information, e.g. the number of bytes
buffered, into a separate log file ``@LOGDIR@/rs2caps-stats.log``.
## 2018-01-24
### Added
- Optimized config script in combination with high station count
## 2018-01-17
### Added
- Added option to synchronize the journal file with bindings
## 2016-06-08
### Added
- Added option ``--passthrough`` which will not read the inventory from
the database and thus does not require a database connection and it will
not subscribe to any stream at the recordstream. Instead it will process
everything that it receives. This is most useful in combination with files.
## 2016-06-03
### Added
- backfilling support
## 2016-05-25
### Added
- support to load inventory from file or database. The configuration may be
adopted using the standard SeisComP3 options.

# Change Log
All notable changes to the slink2caps plugin will be documented in this file.
## 2023.254
### Added
- Make `output.maxFutureEndTime` configurable in scconfig.
## 2022.340
### Added
- Support all Seedlink 3.3 features which includes plugin proc
definitions
## 2022-05-23
### Added
- CAPS authentication support for outgoing connections.
Use the config option ``output.address`` to provide
the data output URL in the form
[[caps|capss]://][user:pass@]host[:port], e.g.:
```
output.address = caps://user:pass@localhost:18003
```
## 2022-04-14
### Changed
- Transient packets will be written to disk during shutdown to
prevent packet loss
## 2022-04-07
### Fixed
- Shutdown in case no data could be sent to CAPS
## 2022-03-03
### Fixed
- Fixed usage of `output.recordBufferSize` which was previously ignored
- Set default buffer size to 128k
## 2021-11-23
### Fixed
- First shutdown plugins and then stop caps connection to avoid lost
records during shutdown
## 2019-09-24
### Fixed
- Flush all transient packages before closing the connection to CAPS at exit
## 2019-05-06
### Fixed
- Capturing of SeedLink plugins logs. Under certain conditions the data
acquisition could be affected causing packet loss.
## 2019-03-12
### Added
- Capture SeedLink plugins logs

# Change Log
All notable changes to CAPS will be documented in this file.
Please note that we have changed the date format from year-month-day
to year.dayofyear to be in sync with `caps -V`.
## 2025.232
- Fix data retrieval at the beginning of a year with archive files that start
after the requested start time but on the same day.
## 2025.199
- Fix station lookup in web application v2. This bug led to station symbols
being placed in an arbitrary fixed grid and to wrong plots.
- Add preferred nodal plane to the focal mechanism page in OriginLocatorView v2.
## 2025.135
- Fix datafile header CRC computation.
## 2025.128
- Relax NSLC uppercase requirement for FDSNWS dataselect request.
## 2025.112
- Fix crash in combination with `caps --read-only`.
## 2025.101
- Add option `AS.filebase.params.concurrency` to write to the archive
concurrently with multiple threads. This can improve performance with some
storage technologies such as SSD / NVMe under very high load, or with
high-latency storage devices such as network-attached storage under
moderate load.
- Optimized write performance by reducing and combining page updates.
## 2024.290
- Add option to purge data via the CAPS protocol API. Only users with the `purge`
permission can delete data from the archive.
## 2024.269
- Fixed crash on inserting data under some still unclear circumstances.
## 2024.253
- Add more robust checks to detect corrupted files caused by, e.g.,
faulty storage devices or hardware failures/crashes. Corrupt files could have
caused segmentation faults of `caps`.
## 2024.215
- Fix web frontend bug if `AS.http.fdsnws` is specified. This
bug prevented the web frontend from loading.
## 2024.183
- Add record filter options to rifftool data dump mode
## 2024.151
- Improve logging for plugin port: add IP and port to disconnect
messages and log disconnection requests from the plugin to
INFO level.
## 2024.143
- Fix issue with merging raw records after a restart
## 2024.096
- Attempt to fix dashboard websocket standing connection counter
## 2024.094
- Fix errors when purging a datafile which is still active
## 2024.078
- Ignore records without start time and/or end time when
rebuilding the index of a data file.
## 2024.066
- Ignore packets with invalid start and/or end time
- Fix rifftool with respect to checking data files with
check command: ignore invalid times.
- Add corrupted record and chunk count to chunks command
of rifftool.
## 2024.051
- Fix frontend storage time per second scale units
- Fix frontend real time channel display update
- Fix overview plot update when locking the time range
## 2024.047
- Update frontend
## 2024.024
- Update frontend
## 2024.022
- Add support for additional web applications to be integrated
into the web frontend
## 2023.355
- Update web frontend
- Close menu on channels page on mobile screens
if clicked outside the menu
## 2023.354
- Update web frontend
- Improve rendering on mobile devices
## 2023.353
- Update web frontend
- Server statistics is now the default page
- The plot layout sticks the time scale to the bottom
- Bug fixes
## 2023.348
- Add support for `info server modified after [timestamp]`
- Update web frontend
## 2023.347
- Some more internal optimizations
## 2023.346
- Fix bug in basic auth implementation that caused all clients to disconnect
when the configuration was reloaded.
## 2023.331
- Correct system write time metrics
## 2023.328
- Extend notification measuring
## 2023.327
- Fix crash with `--read-only`.
- Improve input rate performance with many connected clients.
## 2023.326
- Internal optimization: distribute notification handling across multiple
CPUs to speed up handling many connections (> 500).
- Add notification time to storage time plot
## 2023.325
- Internal optimization: compile client session decoupled from notification
loop.
## 2023.321
- Decouple data disc storage from client notifications. This will increase
performance if many real-time clients are connected. A new parameter has
been added to control the size of the notification queue:
`AS.filebase.params.q`. The default value is 1000.
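For installations with many connected real-time clients the queue could be
enlarged, e.g. (illustrative value; the default is 1000):

```config
AS.filebase.params.q = 5000
```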
## 2023.320
- Add file storage optimization which might be useful when dealing with a large
number of channels. In particular `AS.filebase.params.writeMetaOnClose` and
`AS.filebase.params.alignIndexPages` have been added in order to reduce the
I/O bandwidth.
- Add write thread priority option. This requires the user who is running
CAPS to be able to set rtprio, see limits.conf.
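The two storage options above could be enabled together as follows (hypothetical
boolean values; consult the module documentation for the exact semantics):

```config
AS.filebase.params.writeMetaOnClose = true
AS.filebase.params.alignIndexPages = true
```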
## 2023.312
- Do not block if inventory is being reloaded
## 2023.311
- Add average physical storage time metric
## 2023.299
- Fix storage time statistics in combination with client requests
- Improve statistics plot in web frontend
## 2023.298
- Add storage time per package to statistics
## 2023.241
- Fix protocol orchestration for plugins in combination with authentication
## 2023.170
- Add section on retrieval of data availability to documentation
## 2023.151
- Fix crash in combination with invalid HTTP credential format
## 2023.093
- Add note to documentation that inventory should be enabled in combination
with WWS for full support.
## 2023.062
- Add documentation of rifftool which is available through the separate
package 'caps-tools'.
## 2023.055
- Internal cleanups
## 2023.024
- Fix crash if requested heli filter band is out of range
- Improve request logging for heli requests
## 2023.011
### Changed
- Change favicon, add SVG and PNG variants
## 2023.011
### Fixed
- Client connection statistics
## 2023.010
### Fixed
- Crash in combination with websocket data connections
## 2023.004
### Fixed
- Reload operation with respect to access changes. Recent versions
crashed under some circumstances.
## 2022.354
### Added
- Show connection statistics in the frontend
## 2022.349
### Changed
- Improved read/write scheduler inside CAPS to optimize for a
huge number of clients
## 2022.346
### Fixed
- Fixed statistics calculation in `--read-only` mode.
## 2022.342
### Added
- Read optional index files per archive directory during startup which
allow skipping the directory scan and relying only on the index information.
This can be useful if read-only mounted directories should be served and
skipped from possible scans to reduce archive scan time.
## 2022.341
### Changed
- Improve start-up logging with respect to archive scanning and setup.
All information goes to the notice level and will be logged irrespective
of the set log level.
- Add configuration option to define the path of the archive log file,
`AS.filebase.logFile`.
## 2022.334
### Fixed
- Fixed bug which prevented forwarding of new channels in combination
with wildcard requests.
## 2022.333
### Changed
- Improve websocket implementation
## 2022.332
### Changed
- Increase reload timeout from 10 to 60s
## 2022.327
### Fixed
- Fixed invalid websocket frames sent with CAPS client protocol
- Fixed lag in frontend when a channel overview reload is triggered
## 2022.322
### Added
- Added system error message if a data file cannot be created.
- Try to raise ulimit to at least cached files plus opened files
and terminate if that was not successful.
## 2022.320
### Fixed
- Fixed storage of overlapping raw records which overlap with
gaps in a data file.
## 2022.314
### Fixed
- Fixed trimming of raw records while storing them. If some
samples were trimmed then sometimes raw records were merged
although they do not share a common end and start time.
## 2022.307
### Fixed
- Fixed deadlock in combination with server info queries
## 2022.284
### Fixed
- Fixed segment resolution evaluation in frontend
## 2022.278
### Fixed
- Fixed memory leak in combination with some gap requests
## 2022.269
### Fixed
- Memory leak in combination with request logs.
### Changed
- Removed user `FDSNWS` in order to allow consistent permissions
with other protocols. The default anonymous access is authenticated
as guest. Furthermore, HTTP Basic Authentication can be used to
authenticate a regular CAPS user although that is not part of the
FDSNWS standard. This is an extension of CAPS.
If you have set up special permission for the FDSNWS user then you
have to revise them.
The rationale behind this change is (as stated above) consistency.
Furthermore the ability to configure access based on IP addresses
drove that change. If CAPS authenticates any fdsnws request as
user `FDSNWS` then IP rules are not taken into account. Only
anonymous requests are subject to IP based access rules. We do not
believe that the extra `FDSNWS` user added any additional security.
## 2022.265
### Fixed
- Crash in combination with MTIME requests.
## 2022.262
### Added
- Added modification time filter to stream requests. This allows
requesting data and segments which were available at a certain time.
## 2022-09-06
### Improved
- Improved frontend performance with many thousands of channels and
high segmentation.
### Fixed
- Fixed time window trimming of raw records which prevented data delivery
under some very rare circumstances.
## 2022-09-02
### Added
- List RESOLUTION parameter in command list returned by HELP on client
interface.
## 2022-08-25
### Changed
- Allow floating point numbers for the slist format written by capstool.
## 2022-08-25
### Important
- Serve WebSocket requests via the regular HTTP interface. The
configuration variables `AS.WS.port` and `AS.WS.SSL.port` have
been removed. If WebSocket access is not desired then the HTTP
interface must be disabled.
- Reworked the HTTP frontend which now provides display of channel segments,
cumulative station and network views and a view with multiple traces.
- In the reworked frontend, the server statistics are only available to users
which are member of the admin group as defined by the access control file
configured in `AS.auth.basic.users.passwd`.
## 2022-08-16
### Added
- Open client files read-only and only request write access if the index
needs to be repaired or other maintenance operations must be performed.
This makes CAPS work on a read-only mounted file system.
## 2022-07-12
### Fixed
- Fixed HELI request with respect to sampling rate return value.
It returned the underlying stream sampling rate rather than 1/1.
## 2022-06-10
### Fixed
- Improve bad chunk detection in corrupt files. Although CAPS is
pretty stable when it comes to corrupted files other tools might
not. This improvement will trigger a file repair if a bad chunk
has been detected.
## 2022-06-07
### Fixed
- Infinite loop if segments with resolution >= 1 were requested.
## 2022-05-30
### Added
- Add "info server" request to query internal server state.
## 2022-05-18
### Fixed
- Fix possible bug in combination with websocket requests. The
issue exhibits as such as the connection does not respond anymore.
Closing and reopening the connection would work.
## 2022-05-09
### Added
- Add gap/segment query.
## 2022-04-26
### Important
- With this release we have split the server and the tools
- riffdump
- riffsniff
- rifftest
- capstool
into separate packages. We did this because for some use cases
it makes sense to install only these tools. The new package is
called `caps-tools` and activated for all CAPS customers.
## 2022-03-28
### Changed
- Update command-line help for capstool.
## 2022-03-03
### Added
- Log plugin IP and port on accept.
- Log plugin IP and port on package store error.
## 2021-12-20
### Added
- Explain record sorting in capstool documentation.
## 2021-11-09
### Fixed
- Fixed helicorder request in combination with filtering. The
issue caused wrong helicorder min/max samples to be returned.
## 2021-10-26
### Fixed
- Fixed data extraction for the first record if it does not
intersect with the requested time window.
## 2021-10-19
### Changed
- Update print-access help page entry
- Print help page in case of unrecognized command line options
### Fixed
- Do not print archive stats when the help page or version information is
requested
## 2021-09-20
### Fixed
- Fixed crash if an FDSNWS request with an empty compiled channel list has been
made
## 2021-09-17
### Added
- New config option `AS.filebase.purge.referenceTime` defining which reference
time should be used during a purge run. Available are:
- EndTime: The purge run uses the end time per stream as reference time.
- Now: The purge run uses the current time as reference time.
By default the purge operation uses the stream end time as reference time.
To switch to **Now** add the following entry to the caps configuration.
```config
AS.filebase.purge.referenceTime = Now
```
## 2021-05-03
### Changed
- Log login and logout attempts as well as blocked stream requests to request
log.
- Allow whitespaces in passwords.
## 2021-04-15
### Fixed
- Rework CAPS access rule evaluation.
### Changed
- Comprehensive rework of CAPS authentication feature documentation.
## 2021-03-11
### Important
- Reworked data file format. A high-performance index has been added to the
data files which requires a conversion of the data files. See the CAPS
documentation about upgrading. The conversion is done transparently in the
background but could affect performance while the conversion is in progress.
## 2020-10-12
### Added
- Provide documentation of the yet2caps plugin.
## 2020-09-04
### Fixed
- Fixed gaps in helicorder request.
## 2020-07-01
### Fixed
- Don't modify the stream start time if the associated data file
couldn't be deleted during the purge run. This approach makes sure that
stream start time and the data files are kept in sync.
## 2020-02-24
### Added
- Extended purge log. The extended purge log can be enabled with
the configuration parameter `AS.logPurge`. This feature is not enabled
by default.
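Assuming a boolean switch, the extended purge log would be enabled with:

```config
AS.logPurge = true
```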
### Changed
- Log maximum number of days to keep data per stream at start.
## 2020-01-27
### Fixed
- Typo in command line output.
## 2019-11-26
### Added
- Added new command line option `configtest` that runs a
configuration file syntax check. It parses the configuration
files and either reports Syntax OK or detailed information
about the particular syntax error.
- Added Websocket interface which accepts HTTP connections
(e.g. from a web browser) and provides the CAPS
protocol via Websockets. An additional configuration will
be necessary:
```config
AS.WS.port = 18006
# Provides the Websocket interface via secure sockets layer.
# The certificate and key used will be read from
# AS.SSL.certificate and AS.SSL.key.
AS.WS.SSL.port = 18007
```
### Changed
- Simplified the authorization configuration. Instead of using one
login file for each CAPS interface we read the authentication
information from a shadow file. The file contains one line
per user where each line is of format "username:encrypted_pwd".
To encrypt a password mkpasswd can be used. It is recommended to
apply a strong algorithm such as sha-256 or sha-512. The command
"user=sysop pw=`mkpasswd -m sha-512` && echo $user:$pw"
would generate a line for e.g. user "sysop". The shadow
file can be configured with the config option `AS.users.shadow`.
Example:
```config
# The username is equal to the password
test:$6$jHt4SqxUerU$pFTb6Q9wDsEKN5yHisPN4g2PPlZlYnVjqKFl5aIR14lryuODLUgVdt6aJ.2NqaphlEz3ZXS/HD3NL8f2vdlmm0
user1:$6$mZM8gpmKdF9D$wqJo1HgGInLr1Tmk6kDrCCt1dY06Xr/luyQrlH0sXbXzSIVd63wglJqzX4nxHRTt/I6y9BjM5X4JJ.Tb7XY.d0
user2:$6$zE77VXo7CRLev9ly$F8kg.MC8eLz.DHR2IWREGrSwPyLaxObyfUgwpeJdQfasD8L/pBTgJhyGYtMjUR6IONL6E6lQN.2QLqZ5O5atO/
```
In addition to user authentication user access control properties are defined
in a passwd file. It can be configured with the config option
`AS.users.passwd`. Each line of the file contains a user name or a group
id and a list of properties in format "username:prop1,prop2,prop3".
Those properties are used to grant access to certain functionalities.
Currently the following properties are supported by CAPS: read, write.:
&quot;read and write.&quot;.
By default a anonymous user with read and write permissions exists. Groups use
the prefix **%** so that they are clearly different from users.
Example:
```config
user1: read,write
%test: read
```
The group file maps users to different groups. Each line of the file maps
a group id to a list of user names. It can be configured with the config
option `AS.users.group`.
Example:
```config
test: user2
```
With the reserved keyword **ALL** a rule will be applied to all users.
Example:
```config
STATIONS.DENY = all
STATIONS.AM.ALLOW = user1
```
- We no longer watch the status of the inventory and the access file with
Inotify because it could be dangerous in case of an incomplete saved
configuration. A reload of the configuration can be triggered by sending a
SIGUSR1 signal to the CAPS process. Example:
```bash
kill -SIGUSR1 <pid>
```
CAPS reloads the following files, if necessary:
- shadow,
- passwd,
- access list,
- inventory.
## 2019-10-15
### Changed
- Run archive clean up after start and every day at midnight (UTC).
## 2019-10-01
### Changed
- Increase shutdown timeout to 60 s.
## 2019-05-08
### Fixed
- Fixed potential deadlock in combination with inventory updates.
## 2019-04-23
### Fixed
- Improved plugin data scheduling which previously could have caused increased
delays of data if one plugin transmitted big amounts of data through a low
latency network connection, e.g. localhost.
## 2019-04-08
### Added
- Added new config option `AS.filebase.purge.initIdleTime` that
allows postponing the initial purge process by up to n seconds. Normally
after a start the server tries to catch up with all data which
might be an IO-intensive operation. In case of a huge archive the purge
operation slows down the read/write performance of the system too. To
reduce the load at start it is a good idea to postpone this operation.
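For example, to postpone the initial purge process by one hour (the value is in
seconds as described above; the concrete number is illustrative):

```config
AS.filebase.purge.initIdleTime = 3600
```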
## 2019-03-29
### Added
- Added index file check during archive scan and rebuild them
if corrupt. The lack of a check sometimes caused CAPS to
freeze while starting up.
## 2018-12-11
### Added
- Added support for SC3 schema 0.11.
## 2018-10-18
### Fixed
- Spin up threads correctly in case of erroneous configuration
during live reconfiguration.
## 2018-10-17
### Fixed
- Reinitialize server ports correctly after reloading the access list. This
was not a functional bug, only a small memory leak.
## 2018-09-14
### Fixed
- High IO usage during data storage purge. In the worst case the purge operation
could slow down the complete system so that incoming packets could not be
handled anymore.
## 2018-09-05
### Added
- Access rule changes do not require a restart of the server anymore.
## 2018-08-29
### Changed
- Assigned human readable descriptions to threads. Process information tools
like top or htop can display this information.
## 2018-08-08
### Changed
- Reduced server load for real-time client connections.
## 2018-05-30
### Fixed
- Fixed unexpected closed SSL connections.
## 2018-05-25
### Fixed
- Fixed high load if many clients request many streams in real-time.
## 2018-05-18
### Added
- Add option to log anonymous IP addresses.
## 2018-04-17
### Fixed
- Improved handling of incoming packets to prevent packet loss to subscribed
sessions in case of heavy load.
## 2018-03-08
### Fixed
- Fixed access list evaluator. Rather than replacing general rules with concrete
rules they are now merged hierarchically.
## 2018-02-13
### Added
- Restrict plugin stream codes to [A-Z][a-z][0-9][-_].
## 2018-01-31
### Changed
- CAPS archive log will be removed at startup and written at shutdown. With
this approach we want to force a rescan of the complete archive in case of
an unexpected server crash.
## 2018-01-30
### Fixed
- Fixed parameter name if HTTP SSL port, which should be `AS.http.SSL.port`
but was `AS.SSL.http.port`.
## 2018-01-29
### Fixed
- Fixed caps protocol real time handler bug which caused gaps on client-side
when retrieving real time data.
## 2018-01-26
### Changed
- Log requests per CAPS server instance.
### Fixed
- Improved data scheduler to hopefully prevent clients from stalling the
plugin input connections.
## 2018-01-02
### Fixed
- Fixed bug in combination with SSL connections that caused CAPS to not
accept any incoming connections after some time.
## 2017-11-15
### Added
- Added option `AS.inventory` which lets CAPS read an SC3 inventory XML
file to be used together with WWS requests to populate channel geo locations
which will enable e.g. the map feature in Swarm.
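A sketch of the corresponding configuration (the file path is a placeholder):

```config
AS.inventory = /home/sysop/inventory.xml
```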
## 2017-11-14
### Fixed
- Data store start time calculation in case the first record start time is
greater than the requested one.
## 2017-11-08
### Fixed
- WWS Heli request now returns correct timestamps for data with gaps.
## 2017-10-13
### Fixed
- FDSN request did not return the first record requested.
## 2017-08-30
### Fixed
- Segmentation fault caused by invalid FDSN request.
- Timing bug in the CAPS WWS protocol implementation.
## 2017-06-15
### Added
- Add `AS.minDelay` which delays time window requests for the specified
number of seconds. This parameter is only effective with FDSNWS and WWS.
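For example, to delay FDSNWS and WWS time window requests by two minutes
(illustrative value, in seconds per the description above):

```config
AS.minDelay = 120
```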
## 2017-05-30
### Feature
- Add experimental Winston Wave Server(WWS) support. This feature is disabled
by default.
## 2017-05-09
### Feature
- Add FDSNWS dataselect support for archived miniSEED records. This
support is implicitly enabled if HTTP is activated.
## 2017-05-03
### Feature
- Support for SSL and authentication in AS, client and HTTP transport.
## 2017-03-24
### Fixed
- MSEED support.
## 2017-03-09
### Changed
- Moved log output stating that the index was reset and that an incoming
record was ignored to the debug channel.
## 2016-06-14
### Added
- Added option `AS.clientBufferSize` to configure the buffer
size for each client connection. The higher the buffer size
the better the request performance.
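For example (illustrative value; assuming the buffer size is given in bytes):

```config
AS.clientBufferSize = 1048576
```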
## 2016-06-09
### Added
- Added out-of-order requests for clients. The rsas plugin with
version >= 0.6.0 supports requesting out-of-order packets with
parameter `ooo`, e.g. `caps://localhost?ooo`.
- Improved record insertion speed with out-of-order records.
## 2016-03-09
### Fixed
- Low packet upload rate.

# Change Log
All notable changes to sproc2caps will be documented in this file.
## 2024.351
### Fixed
- Compatibility with upcoming SeisComP release
## 2024.262
### Fixed
- MiniSEED encoding allows half a sample timing tolerance
to detect contiguous records.
## 2024.257
### Fixed
- Memory leak
## 2024.234
### Fixed
- Output sampling rate when input sampling rate is a fraction
## 2024.233
### Added
- The option `--stop` terminates the data processing when the data input and
processing is complete.
## 2023.289
### Fixed
- When using a stream as input several times, only the last registered
stream was used.
## 2023.225
### Changed
- Make stream map reading slightly more error-tolerant
## 2023.151
### Fixed
- Inventory loading from file
## 2021-04-29
### Added
- Add SSL and authentication support for the output connection.
With this version the data output URL can be set with the
config option ``output.address``. The formal definition
of the field is: [[caps|capss]://][user:pass@]host[:port] e.g.
```
output.address = capss://caps:caps@localhost:18003
```
The new output.address parameter supersedes the output.host and
output.port parameters of previous versions and takes precedence.
The old parameters are kept for compatibility reasons but are
marked as deprecated.
## 2021-04-27
### Fixed
- Expression handling. So far it was not possible to
overwrite expressions on stream level.
## 2020-04-07
### Fixed
- Sequential rules where the result stream is the input of another rule
## 2020-04-06
### Changed
- Support to set expression for each stream independently. If the expression
is omitted the expression configured in `streams.expr` is used.
```
XX.TEST1..HHZ XX.TEST2..HHZ XX.TEST3..HHZ?expr=x1+x2
```
## 2020-02-17
### Changed
- Increase default timeout for acknowledgement messages from 5s to 60s
## 2019-11-25
### Added
- Documentation

.. |nbsp| unicode:: U+00A0
.. |tab| unicode:: U+00A0 U+00A0 U+00A0 U+00A0
.. _sec-archive:
Data Management
***************
:term:`CAPS` uses the :term:`SDS` directory
structure for its archives shown in figure :num:`fig-archive`. SDS organizes
the data in directories by year, network, station and channel.
This tree structure eases archiving of data. One complete year may be
moved to external storage, e.g. a tape library.
.. _fig-archive:
.. figure:: media/sds.png
:width: 12cm
SDS archive structure of a CAPS archive
The data are stored in the channel directories. One file is created per sensor
location for each day of the year. File names take the form
:file:`$net.$sta.$loc.$cha.$year.$yday.data` with
* **net**: network code, e.g. 'II'
* **sta**: station code, e.g. 'BFO'
* **loc**: sensor location code, e.g. '00'. Empty codes are supported
* **cha**: channel code, e.g. 'BHZ'
* **year**: calendar year, e.g. '2021'
* **yday**: day of the year starting with '000' on 1 January
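Combining the example values above, the file holding the data recorded on
1 January 2021 by channel BHZ at sensor location 00 of station BFO in
network II would be named::

   II.BFO.00.BHZ.2021.000.data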
.. note ::
In contrast to CAPS archives, in SDS archives created with
`slarchive <https://docs.gempa.de/seiscomp/current/apps/slarchive.html>`_
the first day of the year, 1 January, is referred to by index '001'.
.. _sec-caps-archive-file-format:
File Format
===========
:term:`CAPS` uses the `RIFF
<http://de.wikipedia.org/wiki/Resource_Interchange_File_Format>`_ file format
for data storage. A RIFF file consists of ``chunks``. Each chunk starts with an 8
byte chunk header followed by data. The first 4 bytes denote the chunk type, the
next 4 bytes the length of the following data block. Currently the following
chunk types are supported:
* **SID** - stream ID header
* **HEAD** - data information header
* **DATA** - data block
* **BPT** - b-tree index page
* **META** - meta chunk of the entire file containing states and a checksum
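The chunk layout described above can be illustrated with a small reader sketch.
It relies only on the 8 byte header (4 byte type tag followed by a 4 byte size);
the little-endian byte order follows the general RIFF convention and the eager
payload read is a simplification of this sketch, not part of the CAPS format
description:

```python
import struct

def iter_chunks(path):
    """Yield (chunk_type, payload) tuples from a RIFF-style data file.

    Illustrative sketch: assumes little-endian int32 sizes (RIFF
    convention) and reads each payload entirely into memory.
    """
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break  # end of file or truncated trailing header
            tag, size = struct.unpack("<4si", header)
            yield tag.decode("ascii").strip(), f.read(size)
```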
Figure :num:`fig-file-one-day` shows the possible structure of an archive
file consisting of the different chunk types.
.. _fig-file-one-day:
.. figure:: media/file_one_day.png
:width: 18cm
Possible structure of an archive file
SID Chunk
---------
A data file may start with a SID chunk which defines the stream id of the
data that follows in DATA chunks. In the absence of a SID chunk, the stream ID
is retrieved from the file name.
===================== ========= =====================
content type bytes
===================== ========= =====================
id="SID" char[4] 4
chunkSize int32 4
networkCode + '\\0' char* len(networkCode) + 1
stationCode + '\\0' char* len(stationCode) + 1
locationCode + '\\0' char* len(locationCode) + 1
channelCode + '\\0' char* len(channelCode) + 1
===================== ========= =====================
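A minimal reader for this chunk might look as follows. This is an illustrative
sketch, not official CAPS code; it assumes little-endian integers (the usual
RIFF convention) and a 4-byte id of ``SID`` padded with a NUL byte:

```python
import struct

def parse_sid_chunk(buf: bytes):
    """Parse a SID chunk: 4-byte id, int32 size, four NUL-terminated codes."""
    tag, size = struct.unpack_from("<4si", buf, 0)
    if not tag.startswith(b"SID"):
        raise ValueError("not a SID chunk")
    payload = buf[8:8 + size]
    # networkCode, stationCode, locationCode, channelCode, each '\0'-terminated
    net, sta, loc, cha = payload.split(b"\x00")[:4]
    return net.decode(), sta.decode(), loc.decode(), cha.decode()

payload = b"II\x00BFO\x0000\x00BHZ\x00"
chunk = b"SID\x00" + struct.pack("<i", len(payload)) + payload
print(parse_sid_chunk(chunk))  # ('II', 'BFO', '00', 'BHZ')
```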
HEAD Chunk
----------
The HEAD chunk contains information about subsequent DATA chunks. It has a fixed
size of 15 bytes and is inserted under the following conditions:
* before the first data chunk (beginning of file)
* packet type changed
* unit of measurement changed
===================== ========= ========
content type bytes
===================== ========= ========
id="HEAD" char[4] 4
chunkSize (=7) int32 4
version int16 2
packetType char 1
unitOfMeasurement char[4] 4
===================== ========= ========
The ``packetType`` entry refers to one of the supported types described in
section :ref:`sec-packet-types`.
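The fixed 15-byte layout maps directly onto a struct format string. As above,
this is a sketch under the assumption of little-endian byte order:

```python
import struct

# "<4s i h B 4s" = id, chunkSize, version, packetType, unitOfMeasurement (15 bytes)
HEAD_FMT = "<4sihB4s"

def parse_head_chunk(buf: bytes) -> dict:
    """Parse the fixed 15-byte HEAD chunk (sketch; little-endian assumed)."""
    tag, size, version, packet_type, unit = struct.unpack_from(HEAD_FMT, buf, 0)
    if not tag.startswith(b"HEAD") or size != 7:
        raise ValueError("not a HEAD chunk")
    return {
        "version": version,
        "packetType": packet_type,
        "unit": unit.rstrip(b"\x00").decode("ascii"),
    }

head = struct.pack(HEAD_FMT, b"HEAD", 7, 1, 1, b"M/S\x00")
print(parse_head_chunk(head))  # {'version': 1, 'packetType': 1, 'unit': 'M/S'}
```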
DATA Chunk
----------
The DATA chunk contains the actual payload, which may be further structured
into header and data parts.
===================== ========= =========
content type bytes
===================== ========= =========
id="DATA" char[4] 4
chunkSize int32 4
data char* chunkSize
===================== ========= =========
Section :ref:`sec-packet-types` describes the currently supported packet types.
Each packet type defines its own data structure. Nevertheless, :term:`CAPS`
requires each type to supply ``startTime`` and ``endTime`` information for
each record in order to create seamless data streams. The ``endTime`` may be
stored explicitly or may be derived from ``startTime``, ``chunkSize``,
``dataType`` and ``samplingFrequency``.
In contrast to data streams, :term:`CAPS` also supports storing individual
measurements. These measurements are indicated by setting the sampling frequency
to 1/0.
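For fixed-rate packets the derivation of ``endTime`` is simple arithmetic. The
following sketch illustrates it; the function and parameter names are
illustrative, not part of the CAPS API:

```python
from datetime import datetime, timedelta

def derive_end_time(start: datetime, chunk_size: int, header_size: int,
                    sample_size: int, freq_num: int, freq_den: int) -> datetime:
    """endTime = startTime + samples * sample interval.

    samples  = payload bytes / size of one sample (dataType)
    interval = freq_den / freq_num seconds (samplingFrequency as a ratio)
    """
    nsamples = (chunk_size - header_size) // sample_size
    return start + timedelta(seconds=nsamples * freq_den / freq_num)

# 400 payload bytes of 4-byte samples at 100 Hz -> 100 samples, 1 s of data
start = datetime(2021, 1, 1, 0, 0, 0)
print(derive_end_time(start, 416, 16, 4, 100, 1))  # 2021-01-01 00:00:01
```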
BPT Chunk
---------
BPT chunks hold information about the file index. All data records are indexed
using a B+ tree. The index key is the tuple of start time and end time of each
data chunk to allow very fast time window lookup and to minimize disk accesses.
The value is a structure and holds the following information:
* File position of the format header
* File position of the record data
* Timestamp of record reception
This chunk holds a single index tree page with a fixed size of 4 KiB
(4096 bytes). More information about B+ trees can be found at
https://en.wikipedia.org/wiki/B%2B_tree.
META Chunk
----------
Each data file contains a META chunk which holds information about the state of
the file. The META chunk is always at the end of the file at a fixed position.
Because CAPS supports pre-allocating file space to minimize disk
fragmentation, even without native file system support, the META chunk
contains information such as:
* effectively used bytes in the file (virtual file size)
* position of the index root node
* the number of records in the file
* the covered time span
and some other internal information.
.. _sec-optimization:
Optimization
============
After a plugin packet is received and before it is written to disk,
:term:`CAPS` tries to optimize the file data in order to reduce the overall
data size and to speed up access. This includes:
* **merging** data chunks for continuous data blocks
* **splitting** data chunks on the date limit
* **trimming** overlapped data
Merging of Data Chunks
----------------------
:term:`CAPS` tries to create large continuous blocks of data by reducing the
number of data chunks. The advantage of large chunks is that less disk space is
occupied by data chunk headers. Also, seeking to a particular time stamp is
faster because fewer data chunk headers need to be read.
Data chunks can be merged if the following conditions apply:
* merging is supported by packet type
* previous data header is compatible according to packet specification, e.g.
``samplingFrequency`` and ``dataType`` matches
* ``endTime`` of last record equals ``startTime`` of new record (no gap)
Figure :num:`fig-file-merge` shows the arrival of a new plugin packet. In
alternative A) the merge fails and a new data chunk is created. In alternative B)
the merge succeeds. In the latter case the new data is appended to the existing
data block and the original chunk header is updated to reflect the new chunk
size.
.. _fig-file-merge:
.. figure:: media/file_merge.png
:width: 18cm
Merging of data chunks for seamless streams
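The merge conditions listed above can be summarized in a small predicate. This
is an illustrative sketch only; the field names are hypothetical and do not
reflect the CAPS internals:

```python
def can_merge(prev: dict, new: dict) -> bool:
    """Return True if a new record may be appended to the previous chunk."""
    return (
        prev["mergeable"]                                # packet type supports merging
        and prev["packetType"] == new["packetType"]      # compatible header ...
        and prev["dataType"] == new["dataType"]
        and prev["samplingFrequency"] == new["samplingFrequency"]
        and prev["endTime"] == new["startTime"]          # ... and no gap or overlap
    )

prev = {"mergeable": True, "packetType": "RAW", "dataType": 101,
        "samplingFrequency": (100, 1), "endTime": "2021-001T00:00:01"}
new = dict(prev, startTime="2021-001T00:00:01")
print(can_merge(prev, new))  # True
```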
Splitting of Data Chunks
------------------------
Figure :num:`fig-file-split` shows the arrival of a plugin packet containing
data of two different days. If possible, the data is split on the date limit. The
first part is appended to the existing data file. For the second part a new day
file is created, containing a new header and data chunk. This approach ensures
that a sample is stored in the correct data file and thus improves the access
time.
Splitting of data chunks is only supported for packet types providing the
``trim`` operation.
.. _fig-file-split:
.. figure:: media/file_split.png
:width: 18cm
Splitting of data chunks on the date limit
Trimming of Overlaps
--------------------
The received plugin packets may contain overlapping time spans. If supported by
the packet type, :term:`CAPS` trims the data to create seamless data streams.
.. _sec-packet-types:
Packet Types
============
:term:`CAPS` currently supports the following packet types:
* **RAW** - generic time series data
* **ANY** - any possible content
* **MiniSeed** - native :term:`MiniSeed`
.. _sec-pt-raw:
RAW
---
The RAW format is a lightweight format for uncompressed time series data with a
minimal header. The chunk header is followed by a 16 byte data header:
============================ ========= =========
content type bytes
============================ ========= =========
dataType char 1
*startTime* TimeStamp [11]
|tab| year int16 2
|tab| yDay uint16 2
|tab| hour uint8 1
|tab| minute uint8 1
|tab| second uint8 1
|tab| usec int32 4
samplingFrequencyNumerator uint16 2
samplingFrequencyDenominator uint16 2
============================ ========= =========
The number of samples is calculated as the remaining ``chunkSize`` divided by
the size of the ``dataType``. The following data type values are supported:
==== ====== =====
id type bytes
==== ====== =====
1 double 8
2 float 4
100 int64 8
101 int32 4
102 int16 2
103 int8 1
==== ====== =====
The RAW format supports the ``trim`` and ``merge`` operation.
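Putting the two tables together, a RAW record could be decoded as follows. This
is a sketch under the assumptions that integers are little-endian and that
``yDay`` counts from 000 for 1 January, as described above:

```python
import struct
from datetime import datetime, timedelta

# dataType id -> sample size in bytes (from the table above)
SAMPLE_SIZE = {1: 8, 2: 4, 100: 8, 101: 4, 102: 2, 103: 1}

# "<b h H B B B i H H" = dataType, year, yDay, hour, minute, second, usec,
# samplingFrequencyNumerator, samplingFrequencyDenominator (16 bytes)
RAW_HDR = "<bhHBBBiHH"

def parse_raw_data(payload: bytes):
    """Decode the 16-byte RAW header and count the samples that follow."""
    dtype, year, yday, hh, mm, ss, usec, num, den = struct.unpack_from(
        RAW_HDR, payload, 0)
    start = datetime(year, 1, 1) + timedelta(
        days=yday, hours=hh, minutes=mm, seconds=ss, microseconds=usec)
    nsamples = (len(payload) - struct.calcsize(RAW_HDR)) // SAMPLE_SIZE[dtype]
    return start, nsamples, (num, den)

# int32 samples (id 101), 100 Hz, starting 1 January 2021
hdr = struct.pack(RAW_HDR, 101, 2021, 0, 0, 0, 0, 0, 100, 1)
print(parse_raw_data(hdr + b"\x00" * 400))
# (datetime.datetime(2021, 1, 1, 0, 0), 100, (100, 1))
```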
.. _sec-pt-any:
ANY
---
The ANY format was developed to store any possible content in :term:`CAPS`. The chunk
header is followed by a 31 byte data header:
============================ ========= =========
content type bytes
============================ ========= =========
type char[4] 4
dataType (=103, unused) char 1
*startTime* TimeStamp [11]
|tab| year int16 2
|tab| yDay uint16 2
|tab| hour uint8 1
|tab| minute uint8 1
|tab| second uint8 1
|tab| usec int32 4
samplingFrequencyNumerator uint16 2
samplingFrequencyDenominator uint16 2
endTime TimeStamp 11
============================ ========= =========
The ANY data header extends the RAW data header by a 4 character ``type``
field. This field is intended to give a hint about the stored data. For example,
an image from a web cam could be announced by the string ``JPEG``.
Since the ANY format removes the restriction to a particular data type, the
``endTime`` can no longer be derived from the ``startTime`` and
``samplingFrequency``. Consequently, the ``endTime`` is explicitly specified in
the header.
Because the content of the ANY format is unspecified it neither supports the
``trim`` nor the ``merge`` operation.
.. _sec-pt-miniseed:
MiniSeed
--------
`MiniSeed <http://www.iris.edu/data/miniseed.htm>`_ is the standard for the
exchange of seismic time series. It uses a fixed record length and applies data
compression.
:term:`CAPS` adds no additional header to the :term:`MiniSeed` data. The
:term:`MiniSeed` record is directly stored after the 8-byte data chunk header.
All meta information needed by :term:`CAPS` is extracted from the
:term:`MiniSeed` header. The advantage of this native :term:`MiniSeed` support
is that existing plugin and client code may be reused. Also the transfer and
storage volume is minimized.
Because of the fixed record size requirement, neither the ``trim`` nor the
``merge`` operation is supported.
.. TODO:
\subsection{Archive Tools}
\begin{itemize}
\item {\tt\textbf{riffsniff}} --
\item {\tt\textbf{rifftest}} --
\end{itemize}