[installation] Init with initial config for global

2025-10-30 15:08:17 +01:00
commit 7640b452ed
3678 changed files with 2200095 additions and 0 deletions

# Change Log
## 2025.101
### Fixed
- Significantly improved performance when requesting many channels from
an upstream CAPS server.
## 2025.069
### Changed
- Ported code to the latest SeisComP API 17 and fixed deprecation
warnings.
## 2024.262
### Fixed
- MiniSEED encoding allows half a sample timing tolerance
to detect contiguous records.
## 2023.234
### Added
- Command-line help and more module documentation.
## 2023.257
### Added
- New configuration options `maxRealTimeGap` and `marginRealTimeGap`.
They allow configuring a dedicated backfilling stream and
preferring real-time data. As a consequence, clients may receive
out-of-order records.
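A minimal sketch of how these options might be set. The values and the assumed unit (seconds) are illustrative only, not taken from the release notes:
```config
# Assumed semantics: gaps larger than maxRealTimeGap are served via the
# dedicated backfilling stream while real-time data stays preferred;
# values here are hypothetical.
maxRealTimeGap = 60
marginRealTimeGap = 10
```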
## 2022-02-28
### Added
- New config option `timeWindowUpdateInterval`. This option
sets the interval in seconds at which the relative request
time window defined by option `days` and/or `daysBefore` is
updated. Use a value less than or equal to zero to disable the update.
This feature is supported in archive mode only.
A typical use case is when data has to be transmitted
continuously with a time delay.
```bash
timeWindowUpdateInterval=86400
```
## 2022-02-25
### Fixed
- Wrong time window subscription after reconnect
## 2020-12-22
### Added
- Configuration description for `daysBefore` in, e.g., scconfig
## 2020-12-17
### Added
- New config option `daysBefore` which can be used to set the end time
of the data acquisition time window to n days before the current time, e.g.,
```bash
daysBefore=10
```
## 2020-02-17
### Changed
- Increase default timeout for acknowledgement messages from 5s to 60s
- Use microsecond precision in data requests
## 2020-02-12
### Added
- Backfilling buffer, a tool to mitigate out-of-order data. Whenever a
gap is detected, records will be held in a buffer and not sent out. Records
are flushed from front to back if the buffer size is exceeded.
## 2020-02-10
### Changed
- Subscribe to streams even if the requested end time is before the last
received timestamp. This is necessary to avoid requesting data again in
the case of wildcard requests.
## 2018-08-05
### Fixed
- segfault in journal file parser
- corrupt journal files
## 2018-03-19
### Added
- SSL support for outgoing connections
## 2018-03-14
### Fixed
- The journal file will be stored by default at @ROOTDIR@/var/run/[name]/journal
where name is the name of the application. In standard cases this is `caps2caps`,
but not when aliases are in use.
## 2017-03-21
### Fixed
- stream recovery in case of wildcard requests
## 2017-02-14
### Added
- out-of-order support

# Change Log
All notable changes to the Python plugins will be documented in this file.
## 2024.262
### Fixed
- MiniSEED encoding allows half a sample timing tolerance
to detect contiguous records.
## 2024.005
### Added
- data2caps
- Document the processing of SLIST files with multiple data blocks.
## 2023.317
### Changed
- image2caps
- Enforce Python3
## 2023.298
### Added
- data2caps
- Allow setting the network code explicitly by `--network`.
### Changed
- data2caps
- Read the sample rate numerator and denominator separately instead of
assuming denominator = 1.
- For format unavco 1.0 the network code must be given explicitly.
### Fixed
- data2caps
- Send data in the unavco data format, which was not done before.
## 2023.255
### Added
- data2caps
- Renamed from raw2caps.
- Support reading slist files, add documentation.
- Support reading strain & seismic data files from www.unavco.org.

# Change Log
All notable changes to the rs plugin will be documented in this file.
## 2025.051
### Added
- Option `days` that allows setting the start time of the data
time window to n days before the current time, e.g.,
```bash
days = 1
```
- Option `daysBefore` that allows setting the end time of the data
time window to n days before the current time, e.g.,
```bash
daysBefore = 1
```
## 2024.262
### Fixed
- MiniSEED encoding allows half a sample timing tolerance
to detect contiguous records.
## 2024.173
### Added
- Config option `streams.passthrough`. Previously, the feature could only
be activated via a command-line option.
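A minimal configuration sketch; the boolean value is an assumption based on the option replacing a command-line switch:
```config
# Enable passthrough mode via configuration (assumed boolean semantics).
streams.passthrough = true
```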
## 2024.156
### Important
- The command-line option `--addr`/`-a` has been renamed to
`--output`/`-O` in order to be consistent with other applications like
caps2caps. Scripts/processes using this parameter must be adjusted.
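A sketch of the required adjustment; the binary name `rs2caps` and the host:port address form are assumptions for illustration:
```bash
# before (older versions)
rs2caps -a localhost:18003
# after this release
rs2caps -O localhost:18003
```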
## 2023.254
### Added
- Make `output.maxFutureEndTime` configurable in scconfig.
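A configuration sketch; the value and the assumed semantics (seconds of tolerated future end time) are hypothetical:
```config
# Assumed semantics: reject records whose end time lies more than this
# many seconds in the future (illustrative value).
output.maxFutureEndTime = 120
```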
## 2023.135
### Fixed
- Inventory subscription
## 2022.332
### Added
- Add poll mode for non-real-time inputs, e.g. fdsnws.
## 2021-04-29
### Added
- Add SSL and authentication support for the output connection.
With this version the data output URL can be set with the
config option ``output.address``. The formal definition
of the field is: [[caps|capss]://][user:pass@]host[:port], e.g.
```
output.address = capss://caps:caps@localhost:18003
```
The new output.address parameter supersedes the output.host and
output.port parameters of previous versions and takes precedence.
The old parameters are kept for compatibility reasons but are
marked as deprecated.
## 2020-02-17
### Changed
- Increase default timeout for acknowledgement messages from 5s to 60s
## 2020-01-23
### Fixed
- Make init script Python 2 and 3 compatible
## 2019-08-07
### Added
- plugin version information
## 2019-01-30
### Fixed
- Loading inventory from file
## 2018-12-17
### Added
- Added new option ``--status-log``. With this option enabled
the plugin writes status information, e.g. the number of bytes
buffered, into a separate log file ``@LOGDIR@/rs2caps-stats.log``.
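A usage sketch; any further options the plugin may need are omitted here:
```bash
# Additionally write status information (e.g. number of buffered bytes)
# to @LOGDIR@/rs2caps-stats.log.
rs2caps --status-log
```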
## 2018-01-24
### Added
- Optimized config script in combination with high station count
## 2018-01-17
### Added
- Added option to synchronize the journal file with bindings
## 2016-06-08
### Added
- Added option ``--passthrough`` which does not read the inventory from
the database, thus requiring no database connection, and does not
subscribe to any stream at the recordstream. Instead the plugin processes
everything it receives. This is most useful in combination with files.
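A usage sketch; the `-I` record URL syntax is the general SeisComP recordstream convention and is assumed to apply here:
```bash
# Process a file directly, without database connection or stream
# subscription (file name is hypothetical).
rs2caps --passthrough -I file://data.mseed
```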
## 2016-06-03
### Added
- backfilling support
## 2016-05-25
### Added
- support to load inventory from file or database. The configuration may be
adapted using the standard SeisComP3 options.

# Change Log
All notable changes to the slink2caps plugin will be documented in this file.
## 2023.254
### Added
- Make `output.maxFutureEndTime` configurable in scconfig.
## 2022.340
### Added
- Support all SeedLink 3.3 features, including plugin proc
definitions
## 2022-05-23
### Added
- CAPS authentication support for outgoing connections.
Use the config option ``output.address`` to provide
the data output URL in the form
[[caps|capss]://][user:pass@]host[:port], e.g.:
```
output.address = caps://user:pass@localhost:18003
```
## 2022-04-14
### Changed
- Transient packets will be written to disk during shutdown to
prevent packet loss
## 2022-04-07
### Fixed
- Shutdown in case no data could be sent to CAPS
## 2022-03-03
### Fixed
- Fixed usage of `output.recordBufferSize`, which previously had no effect
- Set default buffer size to 128k
## 2021-11-23
### Fixed
- Shut down plugins first and then stop the CAPS connection to avoid losing
records during shutdown
## 2019-09-24
### Fixed
- Flush all transient packets before closing the connection to CAPS at exit
## 2019-05-06
### Fixed
- Capturing of SeedLink plugins logs. Under certain conditions the data
acquisition could be affected causing packet loss.
## 2019-03-12
### Added
- Capture SeedLink plugins logs

# Change Log
All notable changes to CAPS will be documented in this file.
Please note that we have changed the date format from year-month-day
to year.dayofyear to be in sync with `caps -V`.
## 2025.232
- Fix data retrieval at the beginning of a year with archive files that start
after the requested start time but on the same day.
## 2025.199
- Fix station lookup in web application v2. This bug led to station symbols
being placed in an arbitrarily fixed grid and to wrong plots.
- Add preferred nodal plane to the focal mechanism page in OriginLocatorView v2.
## 2025.135
- Fix datafile header CRC computation.
## 2025.128
- Relax NSLC uppercase requirement for FDSNWS dataselect request.
## 2025.112
- Fix crash in combination with `caps --read-only`.
## 2025.101
- Add option `AS.filebase.params.concurrency` to write to the archive
concurrently using multiple threads. This can improve performance with some
storage technologies such as SSD / NVMe under very high load, or with
high-latency storage devices such as network-attached storage under
moderate load.
- Optimized write performance by reducing and combining page updates.
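A configuration sketch for the new concurrency option; the value is illustrative and its useful range is not stated in the release notes:
```config
# Number of concurrent archive writer threads (hypothetical value).
AS.filebase.params.concurrency = 4
```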
## 2024.290
- Add option to purge data via the CAPS protocol API. Only users with the `purge`
permission can delete data from the archive.
## 2024.269
- Fixed crash on inserting data under some still unclear circumstances.
## 2024.253
- Add more robust checks to detect corrupted files caused by, e.g.,
faulty storage or hardware failures/crashes. Corrupt files could have
caused segmentation faults of `caps`.
## 2024.215
- Fix webfrontend bug if `AS.http.fdsnws` is specified. This
bug prevented the webfrontend from loading.
## 2024.183
- Add record filter options to rifftool data dump mode
## 2024.151
- Improve logging for plugin port: add IP and port to disconnect
messages and log disconnection requests from the plugin to
INFO level.
## 2024.143
- Fix issue with merging raw records after a restart
## 2024.096
- Attempt to fix dashboard websocket standing connection counter
## 2024.094
- Fix errors when purging a datafile which is still active
## 2024.078
- Ignore records without start time and/or end time when
rebuilding the index of a data file.
## 2024.066
- Ignore packets with invalid start and/or end time
- Fix rifftool with respect to checking data files with
check command: ignore invalid times.
- Add corrupted record and chunk count to chunks command
of rifftool.
## 2024.051
- Fix frontend storage time per second scale units
- Fix frontend real time channel display update
- Fix overview plot update when locking the time range
## 2024.047
- Update frontend
## 2024.024
- Update frontend
## 2024.022
- Add support for additional web applications to be integrated
into the web frontend
## 2023.355
- Update web frontend
- Close menu on channels page on mobile screens
if clicked outside the menu
## 2023.354
- Update web frontend
- Improve rendering on mobile devices
## 2023.353
- Update web frontend
- Server statistics is now the default page
- The plot layout sticks the time scale to the bottom
- Bug fixes
## 2023.348
- Add support for `info server modified after [timestamp]`
- Update web frontend
## 2023.347
- Some more internal optimizations
## 2023.346
- Fix bug in basic auth implementation that caused all clients to disconnect
when the configuration was reloaded.
## 2023.331
- Correct system write time metrics
## 2023.328
- Extend notification measuring
## 2023.327
- Fix crash with `--read-only`.
- Improve input rate performance with many connected clients.
## 2023.326
- Internal optimization: distribute notification handling across multiple
CPUs to speed up handling many connections (> 500).
- Add notification time to storage time plot
## 2023.325
- Internal optimization: compile client session decoupled from notification
loop.
## 2023.321
- Decouple data disc storage from client notifications. This will increase
performance if many real-time clients are connected. A new parameter has
been added to control the size of the notification queue:
`AS.filebase.params.q`. The default value is 1000.
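A configuration sketch; 1000 is the documented default, and a larger value is shown purely for illustration:
```config
# Size of the client notification queue (default: 1000).
AS.filebase.params.q = 5000
```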
## 2023.320
- Add file storage optimization which might be useful if dealing with a large
number of channels. In particular `AS.filebase.params.writeMetaOnClose` and
`AS.filebase.params.alignIndexPages` have been added in order to reduce the
I/O bandwidth.
- Add write thread priority option. This requires the user who is running
CAPS to be able to set rtprio, see limits.conf.
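A sketch of the two named options; their boolean form is an assumption, as the release notes only name them:
```config
# Assumed boolean semantics; both aim to reduce I/O bandwidth with
# many channels (illustrative).
AS.filebase.params.writeMetaOnClose = true
AS.filebase.params.alignIndexPages = true
```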
## 2023.312
- Do not block if inventory is being reloaded
## 2023.311
- Add average physical storage time metric
## 2023.299
- Fix storage time statistics in combination with client requests
- Improve statistics plot in web frontend
## 2023.298
- Add storage time per package to statistics
## 2023.241
- Fix protocol orchestration for plugins in combination with authentication
## 2023.170
- Add section on retrieval of data availability to documentation
## 2023.151
- Fix crash in combination with invalid HTTP credential format
## 2023.093
- Add note to documentation that inventory should be enabled in combination
with WWS for full support.
## 2023.062
- Add documentation of rifftool which is available through the separate
package 'caps-tools'.
## 2023.055
- Internal cleanups
## 2023.024
- Fix crash if requested heli filter band is out of range
- Improve request logging for heli requests
## 2023.011
### Changed
- Change favicon, add SVG and PNG variants
## 2023.011
### Fixed
- Client connection statistics
## 2023.010
### Fixed
- Crash in combination with websocket data connections
## 2023.004
### Fixed
- Reload operation with respect to access changes. Recent versions
crashed under some circumstances.
## 2022.354
### Added
- Show connection statistics in the frontend
## 2022.349
### Changed
- Improved read/write scheduler inside CAPS to optimize for a
huge number of clients
## 2022.346
### Fixed
- Fixed statistics calculation in `--read-only` mode.
## 2022.342
### Added
- Read optional index files per archive directory during startup which
allow skipping the directory scan and relying only on the index information.
This can be useful if mounted read-only directories should be served and
skipped from possible scans to reduce archive scan time.
## 2022.341
### Changed
- Improve start-up logging with respect to archive scanning and setup.
All information goes to the notice level and will be logged irrespective
of the set log level.
- Add configuration option to define the path of the archive log file,
`AS.filebase.logFile`.
## 2022.334
### Fixed
- Fixed bug which prevented forwarding of new channels in combination
with wildcard requests.
## 2022.333
### Changed
- Improve websocket implementation
## 2022.332
### Changed
- Increase reload timeout from 10 to 60s
## 2022.327
### Fixed
- Fixed invalid websocket frames sent with CAPS client protocol
- Fixed lag in frontend when a channel overview reload is triggered
## 2022.322
### Added
- Added system error message if a data file cannot be created.
- Try to raise ulimit to at least cached files plus opened files
and terminate if that was not successful.
## 2022.320
### Fixed
- Fixed storage of overlapping raw records which overlap with
gaps in a data file.
## 2022.314
### Fixed
- Fixed trimming of raw records while storing them. If some
samples were trimmed then sometimes raw records were merged
although they do not share a common end and start time.
## 2022.307
### Fixed
- Fixed deadlock in combination with server info queries
## 2022.284
### Fixed
- Fixed segment resolution evaluation in frontend
## 2022.278
### Fixed
- Fixed memory leak in combination with some gap requests
## 2022.269
### Fixed
- Memory leak in combination with request logs.
### Changed
- Removed user `FDSNWS` in order to allow consistent permissions
with other protocols. The default anonymous access is authenticated
as guest. Furthermore HTTP Basic Authentication can be used to
authenticate a regular CAPS user although that is not part of the
FDSNWS standard. This is an extension of CAPS.
If you have set up special permissions for the FDSNWS user then you
have to revise them.
The rationale behind this change is (as stated above) consistency.
Furthermore the ability to configure access based on IP addresses
drove that change. If CAPS authenticates any FDSNWS request as
user `FDSNWS` then IP rules are not taken into account. Only
anonymous requests are subject to IP-based access rules. We do not
believe that the extra `FDSNWS` user added any additional security.
## 2022.265
### Fixed
- Crash in combination with MTIME requests.
## 2022.262
### Added
- Added modification time filter to stream requests. This allows
requesting data and segments which were available at a certain time.
## 2022-09-06
### Improved
- Improved frontend performance with many thousands of channels and
high segmentation.
### Fixed
- Fixed time window trimming of raw records which prevented data delivery
under some very rare circumstances.
## 2022-09-02
### Added
- List RESOLUTION parameter in command list returned by HELP on client
interface.
## 2022-08-25
### Changed
- Allow floating-point numbers for the slist format written by capstool.
## 2022-08-25
### Important
- Serve WebSocket requests via the regular HTTP interface. The
configuration variables `AS.WS.port` and `AS.WS.SSL.port` have
been removed. If WebSocket access is not desired then the HTTP
interface must be disabled.
- Reworked the HTTP frontend which now provides display of channel segments,
cumulative station and network views and a view with multiple traces.
- In the reworked frontend, the server statistics are only available to users
which are members of the admin group as defined by the access control file
configured in `AS.auth.basic.users.passwd`.
## 2022-08-16
### Added
- Open client files read-only and only request write access if the index
needs to be repaired or other maintenance operations must be performed.
This makes CAPS work on a read-only mounted file system.
## 2022-07-12
### Fixed
- Fixed HELI request with respect to sampling rate return value.
It returned the underlying stream sampling rate rather than 1/1.
## 2022-06-10
### Fixed
- Improve bad chunk detection in corrupt files. Although CAPS is
pretty stable when it comes to corrupted files, other tools might
not. This improvement will trigger a file repair if a bad chunk
has been detected.
## 2022-06-07
### Fixed
- Infinite loop if segments with resolution >= 1 were requested.
## 2022-05-30
### Added
- Add "info server" request to query internal server state.
## 2022-05-18
### Fixed
- Fix possible bug in combination with websocket requests. The
issue manifests itself in the connection no longer responding;
closing and reopening the connection would work.
## 2022-05-09
### Added
- Add gap/segment query.
## 2022-04-26
### Important
- With this release we have split the server and the tools
- riffdump
- riffsniff
- rifftest
- capstool
into separate packages. We did this because for some use cases
it makes sense to install only these tools. The new package is
called `caps-tools` and activated for all CAPS customers.
## 2022-03-28
### Changed
- Update command-line help for capstool.
## 2022-03-03
### Added
- Log plugin IP and port on accept.
- Log plugin IP and port on package store error.
## 2021-12-20
### Added
- Explain record sorting in capstool documentation.
## 2021-11-09
### Fixed
- Fixed helicorder request in combination with filtering. The
issue caused wrong helicorder min/max samples to be returned.
## 2021-10-26
### Fixed
- Fixed data extraction for the first record if it does not
intersect with the requested time window.
## 2021-10-19
### Changed
- Update print-access help page entry
- Print help page in case of unrecognized command line options
### Fixed
- Do not print archive stats when the help page or version information is
requested
## 2021-09-20
### Fixed
- Fixed crash if an FDSNWS request with an empty compiled channel list was
made
## 2021-09-17
### Added
- New config option `AS.filebase.purge.referenceTime` defining which reference
time should be used during a purge run. Available are:
- EndTime: The purge run uses the end time per stream as reference time.
- Now: The purge run uses the current time as reference time.
By default the purge operation uses the stream end time as reference time.
To switch to **Now** add the following entry to the CAPS configuration.
```config
AS.filebase.purge.referenceTime = Now
```
## 2021-05-03
### Changed
- Log login and logout attempts as well as blocked stream requests to request
log.
- Allow whitespaces in passwords.
## 2021-04-15
### Fixed
- Rework CAPS access rule evaluation.
### Changed
- Comprehensive rework of CAPS authentication feature documentation.
## 2021-03-11
### Important
- Reworked data file format. A high-performance index has been added to the
data files which requires a conversion of the data files. See the CAPS
documentation about upgrading. The conversion is done transparently in the
background but could affect performance while the conversion is in progress.
## 2020-10-12
### Added
- Provide documentation of the yet2caps plugin.
## 2020-09-04
### Fixed
- Fixed gaps in helicorder request.
## 2020-07-01
### Fixed
- Don't modify the stream start time if the associated data file
couldn't be deleted during a purge run. This approach makes sure that
the stream start time and the data files are kept in sync.
## 2020-02-24
### Added
- Extended purge log. The extended purge log can be enabled with
the configuration parameter `AS.logPurge`. This feature is not enabled
by default.
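A configuration sketch; the boolean form is an assumption, as the release notes only name the parameter:
```config
# Enable the extended purge log (assumed boolean semantics).
AS.logPurge = true
```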
### Changed
- Log maximum number of days to keep data per stream at start.
## 2020-01-27
### Fixed
- Typo in command line output.
## 2019-11-26
### Added
- Added new command line option `configtest` that runs a
configuration file syntax check. It parses the configuration
files and reports either Syntax OK or detailed information
about the particular syntax error.
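A usage sketch; the long-option form `--configtest` is assumed here:
```bash
# Parse the configuration files and report "Syntax OK" or the error.
caps --configtest
```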
- Added Websocket interface which accepts HTTP connections
(e.g. from a web browser) and provides the CAPS
protocol via Websockets. An additional configuration will
be necessary:
```config
AS.WS.port = 18006
# Provides the Websocket interface via secure sockets layer.
# The certificate and key used will be read from
# AS.SSL.certificate and AS.SSL.key.
AS.WS.SSL.port = 18007
```
### Changed
- Simplified the authorization configuration. Instead of using one
login file for each CAPS interface we read the authentication
information from a shadow file. The file contains one line
per user where each line is of format "username:encrypted_pwd".
To encrypt a password mkpasswd can be used. It is recommended to
apply a strong algorithm such as sha-256 or sha-512. The command
"user=sysop pw=`mkpasswd -m sha-512` && echo $user:$pw"
would generate a line for e.g. user "sysop". The shadow
file can be configured with the config option `AS.users.shadow`.
Example:
```config
# The username is equal to the password
test:$6$jHt4SqxUerU$pFTb6Q9wDsEKN5yHisPN4g2PPlZlYnVjqKFl5aIR14lryuODLUgVdt6aJ.2NqaphlEz3ZXS/HD3NL8f2vdlmm0
user1:$6$mZM8gpmKdF9D$wqJo1HgGInLr1Tmk6kDrCCt1dY06Xr/luyQrlH0sXbXzSIVd63wglJqzX4nxHRTt/I6y9BjM5X4JJ.Tb7XY.d0
user2:$6$zE77VXo7CRLev9ly$F8kg.MC8eLz.DHR2IWREGrSwPyLaxObyfUgwpeJdQfasD8L/pBTgJhyGYtMjUR6IONL6E6lQN.2QLqZ5O5atO/
```
In addition to user authentication user access control properties are defined
in a passwd file. It can be configured with the config option
`AS.users.passwd`. Each line of the file contains a user name or a group
id and a list of properties in the format "username:prop1,prop2,prop3".
Those properties are used to grant access to certain functionalities.
Currently the following properties are supported by CAPS: read and write.
By default an anonymous user with read and write permissions exists. Groups use
the prefix **%** so that they are clearly different from users.
Example:
```config
user1: read,write
%test: read
```
The group file maps users to different groups. Each line of the file maps
a group id to a list of user names. It can be configured with the config
option `AS.users.group`.
Example:
```config
test: user2
```
With the reserved keyword **ALL** a rule will be applied to all users.
Example:
```config
STATIONS.DENY = all
STATIONS.AM.ALLOW = user1
```
- We no longer watch the status of the inventory and the access file with
Inotify because it could be dangerous in case of an incompletely saved
configuration. A reload of the configuration can be triggered by sending a
SIGUSR1 signal to the CAPS process. Example:
```bash
kill -SIGUSR1 <pid>
```
CAPS reloads the following files, if necessary:
- shadow,
- passwd,
- access list,
- inventory.
## 2019-10-15
### Changed
- Run archive clean-up after start and every day at midnight (UTC).
## 2019-10-01
### Changed
- Increase shutdown timeout to 60 s.
## 2019-05-08
### Fixed
- Fixed potential deadlock in combination with inventory updates.
## 2019-04-23
### Fixed
- Improved plugin data scheduling which could have caused increased delays
of data if one plugin transmits large amounts of data through a low-latency
network connection, e.g. localhost.
## 2019-04-08
### Added
- Added new config option `AS.filebase.purge.initIdleTime` that
allows postponing the initial purge process by up to n seconds. Normally
after a start the server tries to catch up on all data which
might be an IO-intensive operation. In case of a huge archive the purge
operation also slows down the read/write performance of the system. To
reduce the load at start it is a good idea to postpone this operation.
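A configuration sketch; the value is illustrative, with the unit (seconds) taken from the description above:
```config
# Postpone the initial purge run by one hour (value in seconds).
AS.filebase.purge.initIdleTime = 3600
```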
## 2019-03-29
### Added
- Added an index file check during archive scan, rebuilding index files
if corrupt. The lack of such a check sometimes caused CAPS to
freeze while starting up.
## 2018-12-11
### Added
- Added support for SC3 schema 0.11.
## 2018-10-18
### Fixed
- Spin up threads correctly in case of erroneous configuration
during live reconfiguration.
## 2018-10-17
### Fixed
- Reinitialize server ports correctly after reloading the access list. This
was not a functional bug, only a small memory leak.
## 2018-09-14
### Fixed
- High IO usage during data storage purge. In the worst case the purge operation
could slow down the complete system so that incoming packets could not be
handled anymore.
## 2018-09-05
### Added
- Access rule changes do not require a restart of the server anymore.
## 2018-08-29
### Changed
- Assigned human readable descriptions to threads. Process information tools
like top or htop can display this information.
## 2018-08-08
### Changed
- Reduced server load for real-time client connections.
## 2018-05-30
### Fixed
- Fixed unexpected closed SSL connections.
## 2018-05-25
### Fixed
- Fixed high load if many clients request many streams in real-time.
## 2018-05-18
### Added
- Add option to log anonymous IP addresses.
## 2018-04-17
### Fixed
- Improved handling of incoming packets to prevent packet loss to subscribed
sessions in case of heavy load.
## 2018-03-08
### Fixed
- Fixed access list evaluator. Rather than replacing general rules with concrete
rules they are now merged hierarchically.
## 2018-02-13
### Added
- Restrict plugin stream codes to [A-Z][a-z][0-9][-_].
## 2018-01-31
### Changed
- CAPS archive log will be removed at startup and written at shutdown. With
this approach we want to force a rescan of the complete archive in case of
an unexpected server crash.
## 2018-01-30
### Fixed
- Fixed parameter name of the HTTP SSL port, which should be `AS.http.SSL.port`
but was `AS.SSL.http.port`.
## 2018-01-29
### Fixed
- Fixed CAPS protocol real-time handler bug which caused gaps on the client
side when retrieving real-time data.
## 2018-01-26
### Changed
- Log requests per CAPS server instance.
### Fixed
- Improved data scheduler to hopefully prevent clients from stalling the
plugin input connections.
## 2018-01-02
### Fixed
- Fixed bug in combination with SSL connections that caused CAPS to not
accept any incoming connections after some time.
## 2017-11-15
### Added
- Added option `AS.inventory` which lets CAPS read an SC3 inventory XML
file to be used together with WWS requests to populate channel geo locations,
enabling e.g. the map feature in Swarm.
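A configuration sketch; the path is hypothetical:
```config
# SC3 inventory XML file used to populate channel geo locations for WWS.
AS.inventory = /home/sysop/inventory.xml
```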
## 2017-11-14
### Fixed
- Data store start time calculation in case the first record start time is
greater than the requested one.
## 2017-11-08
### Fixed
- WWS Heli request now returns correct timestamps for data with gaps.
## 2017-10-13
### Fixed
- FDSN request did not return the first record requested.
## 2017-08-30
### Fixed
- Segmentation fault caused by invalid FDSN request.
- Timing bug in the CAPS WWS protocol implementation.
## 2017-06-15
### Added
- Add `AS.minDelay` which delays time window requests for the specified
number of seconds. This parameter is only effective with FDSNWS and WWS.
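A configuration sketch; the value is illustrative, with the unit (seconds) taken from the description above:
```config
# Delay FDSNWS and WWS time window requests by 60 seconds.
AS.minDelay = 60
```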
## 2017-05-30
### Feature
- Add experimental Winston Wave Server (WWS) support. This feature is disabled
by default.
## 2017-05-09
### Feature
- Add FDSNWS dataselect support for archived miniSEED records. This
support is implicitly enabled if HTTP is activated.
## 2017-05-03
### Feature
- Support for SSL and authentication in AS, client and HTTP transport.
## 2017-03-24
### Fixed
- MSEED support.
## 2017-03-09
### Changed
- Moved log output stating that the index was reset and that an incoming
record was ignored to the debug channel.
## 2016-06-14
### Added
- Added option `AS.clientBufferSize` to configure the buffer
size for each client connection. The higher the buffer size
the better the request performance.
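A configuration sketch; the unit (bytes) and the value are assumptions:
```config
# Per-client connection buffer size (assumed to be in bytes;
# illustrative value of 1 MiB).
AS.clientBufferSize = 1048576
```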
## 2016-06-09
### Added
- Added out-of-order requests for clients. The rsas plugin with
version >= 0.6.0 supports requesting out-of-order packets with
parameter `ooo`, e.g. `caps://localhost?ooo`.
- Improved record insertion speed with out-of-order records.
## 2016-03-09
### Fixed
- Low packet upload rate.

# Change Log
All notable changes to sproc2caps will be documented in this file.
## 2024.351
### Fixed
- Compatibility with upcoming SeisComP release
## 2024.262
### Fixed
- MiniSEED encoding allows half a sample timing tolerance
to detect contiguous records.
## 2024.257
### Fixed
- Memory leak
## 2024.234
### Fixed
- Output sampling rate when input sampling rate is a fraction
## 2024.233
### Added
- The option `--stop` terminates the data processing when the data input and
processing is complete.
## 2023.225
### Changed
- Make stream map reading slightly more error-tolerant
## 2023.289
### Fixed
- When using a stream as input several times, only the last registered
stream was used.
## 2023.151
### Fixed
- Inventory loading from file
## 2021-04-29
### Added
- Add SSL and authentication support for the output connection.
With this version the data output URL can be set with the
config option ``output.address``. The formal definition
of the field is: [[caps|capss]://][user:pass@]host[:port] e.g.
```
output.address = capss://caps:caps@localhost:18003
```
The new output.address parameter supersedes the output.host and
output.port parameters of previous versions and takes precedence.
The old parameters are kept for compatibility reasons but are
marked as deprecated.
## 2021-04-27
### Fixed
- Expression handling. So far it was not possible to
overwrite expressions at the stream level.
## 2020-04-07
### Fixed
- Sequential rules where the result stream is the input of another rule
## 2020-04-06
### Changed
- Support setting the expression for each stream independently. If the expression
is omitted, the expression configured in `streams.expr` is used.
```
XX.TEST1..HHZ XX.TEST2..HHZ XX.TEST3..HHZ?expr=x1+x2
```
## 2020-02-17
### Changed
- Increase default timeout for acknowledgement messages from 5s to 60s
## 2019-11-25
### Added
- Documentation

.. |nbsp| unicode:: U+00A0
.. |tab| unicode:: U+00A0 U+00A0 U+00A0 U+00A0
.. _sec-archive:
Data Management
***************
:term:`CAPS` uses the :term:`SDS` directory
structure for its archives shown in figure :num:`fig-archive`. SDS organizes
the data in directories by year, network, station and channel.
This tree structure eases archiving of data. One complete year may be
moved to an external storage, e.g. a tape library.
.. _fig-archive:
.. figure:: media/sds.png
:width: 12cm
SDS archive structure of a CAPS archive
The data are stored in the channel directories. One file is created per sensor
location for each day of the year. File names take the form
:file:`$net.$sta.$loc.$cha.$year.$yday.data` with
* **net**: network code, e.g. 'II'
* **sta**: station code, e.g. 'BFO'
* **loc**: sensor location code, e.g. '00'. Empty codes are supported
* **cha**: channel code, e.g. 'BHZ'
* **year**: calendar year, e.g. '2021'
* **yday**: day of the year starting with '000' on 1 January
.. note ::
In contrast to CAPS archives, in SDS archives created with
`slarchive <https://docs.gempa.de/seiscomp/current/apps/slarchive.html>`_
the first day of the year, 1 January, is referred to by index '001'.
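The naming scheme can be turned into a small helper. The sketch below builds the archive-relative path; the directory layout (year/network/station/channel) follows the description above, while the function name and the exact channel directory naming are illustrative:

```python
from datetime import date

def caps_sds_path(net, sta, loc, cha, day):
    """Build the archive-relative path of one day file; the yday index
    starts at 000 on 1 January as described above."""
    yday = (day - date(day.year, 1, 1)).days          # 0-based day of year
    name = f"{net}.{sta}.{loc}.{cha}.{day.year}.{yday:03d}.data"
    return f"{day.year}/{net}/{sta}/{cha}/{name}"

print(caps_sds_path("II", "BFO", "00", "BHZ", date(2021, 1, 1)))
# 2021/II/BFO/BHZ/II.BFO.00.BHZ.2021.000.data
```

Note that empty location codes simply produce two consecutive dots in the file name.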
.. _sec-caps-archive-file-format:
File Format
===========
:term:`CAPS` uses the `RIFF
<http://de.wikipedia.org/wiki/Resource_Interchange_File_Format>`_ file format
for data storage. A RIFF file consists of ``chunks``. Each chunk starts with an 8
byte chunk header followed by data. The first 4 bytes denote the chunk type, the
next 4 bytes the length of the following data block. Currently the following
chunk types are supported:
* **SID** - stream ID header
* **HEAD** - data information header
* **DATA** - data block
* **BPT** - b-tree index page
* **META** - meta chunk of the entire file containing states and a checksum
Figure :num:`fig-file-one-day` shows the possible structure of an archive
file consisting of the different chunk types.
.. _fig-file-one-day:
.. figure:: media/file_one_day.png
:width: 18cm
Possible structure of an archive file
SID Chunk
---------
A data file may start with a SID chunk which defines the stream id of the
data that follows in DATA chunks. In the absence of a SID chunk, the stream ID
is retrieved from the file name.
===================== ========= =====================
content type bytes
===================== ========= =====================
id="SID" char[4] 4
chunkSize int32 4
networkCode + '\\0' char* len(networkCode) + 1
stationCode + '\\0' char* len(stationCode) + 1
locationCode + '\\0' char* len(locationCode) + 1
channelCode + '\\0' char* len(channelCode) + 1
===================== ========= =====================
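A minimal reader for this chunk might look as follows. The little-endian byte order and the NUL padding of the 4-character chunk id are assumptions, as they are not spelled out above:

```python
import struct

def parse_sid_chunk(buf):
    """Parse a SID chunk: 4-byte id, 4-byte size (little-endian assumed),
    then four NUL-terminated code strings."""
    cid, size = struct.unpack_from("<4si", buf, 0)
    if cid.rstrip(b"\x00 ") != b"SID":
        raise ValueError("not a SID chunk")
    codes = buf[8:8 + size].split(b"\x00")[:4]
    return tuple(c.decode("ascii") for c in codes)

# Synthetic chunk for the stream II.BFO.00.BHZ
payload = b"II\x00BFO\x0000\x00BHZ\x00"
chunk = b"SID\x00" + struct.pack("<i", len(payload)) + payload
print(parse_sid_chunk(chunk))  # ('II', 'BFO', '00', 'BHZ')
```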
HEAD Chunk
----------
The HEAD chunk contains information about subsequent DATA chunks. It has a fixed
size of 15 bytes and is inserted under the following conditions:
* before the first data chunk (beginning of file)
* packet type changed
* unit of measurement changed
===================== ========= ========
content type bytes
===================== ========= ========
id="HEAD" char[4] 4
chunkSize (=7) int32 4
version int16 2
packetType char 1
unitOfMeasurement char[4] 4
===================== ========= ========
The ``packetType`` entry refers to one of the supported types described in
section :ref:`sec-packet-types`.
DATA Chunk
----------
The DATA chunk contains the actual payload, which may be further structured
into header and data parts.
===================== ========= =========
content type bytes
===================== ========= =========
id="DATA" char[4] 4
chunkSize int32 4
data char* chunkSize
===================== ========= =========
Section :ref:`sec-packet-types` describes the currently supported packet types.
Each packet type defines its own data structure. Nevertheless :term:`CAPS`
requires each type to supply ``startTime`` and ``endTime`` information for
each record in order to create seamless data streams. The ``endTime`` may be
stored explicitly or may be derived from ``startTime``, ``chunkSize``,
``dataType`` and ``samplingFrequency``.
In contrast to continuous data streams, :term:`CAPS` also supports storing
individual measurements. These measurements are indicated by setting the
sampling frequency to 1/0.
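Since every chunk starts with the same 8-byte header, a file can be walked without knowing the individual chunk formats. The sketch below assumes little-endian field encoding and builds a tiny synthetic HEAD+DATA sequence:

```python
import io
import struct

def iter_chunks(fp):
    """Yield (chunk_id, payload) for each RIFF-style chunk:
    a 4-byte type followed by a 4-byte payload length."""
    while True:
        header = fp.read(8)
        if len(header) < 8:
            return
        cid, size = struct.unpack("<4si", header)
        yield cid, fp.read(size)

# Synthetic file: a 7-byte HEAD body (version, packetType, unit)
# followed by a 4-byte DATA body.
head = b"HEAD" + struct.pack("<i", 7) + b"\x01\x00" + b"\x01" + b"M/S\x00"
data = b"DATA" + struct.pack("<i", 4) + b"\x00\x01\x02\x03"
ids = [cid for cid, _ in iter_chunks(io.BytesIO(head + data))]
print(ids)  # [b'HEAD', b'DATA']
```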
BPT Chunk
---------
BPT chunks hold information about the file index. All data records are indexed
using a B+ tree. The index key is the tuple of start time and end time of each
data chunk to allow very fast time window lookups and to minimize disk accesses.
The value is a structure and holds the following information:
* File position of the format header
* File position of the record data
* Timestamp of record reception
This chunk holds a single index tree page with a fixed size of 4 KiB
(4096 bytes). More information about B+ trees can be found at
https://en.wikipedia.org/wiki/B%2B_tree.
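The benefit of keying the index by the (startTime, endTime) tuple can be illustrated with a flat sorted list standing in for the on-disk tree pages; this is a deliberate simplification of the real paged B+ tree:

```python
import bisect

# Index keys are (startTime, endTime) tuples; values stand in for the
# record positions held by the real index.
index = [((0, 10), "rec0"), ((10, 20), "rec1"),
         ((20, 30), "rec2"), ((30, 40), "rec3")]

def query(index, t0, t1):
    """Return records whose time span intersects [t0, t1)."""
    starts = [start for (start, _end), _ in index]
    hi = bisect.bisect_left(starts, t1)       # records starting before t1
    return [rec for (start, end), rec in index[:hi] if end > t0]

print(query(index, 15, 25))  # ['rec1', 'rec2']
```

Because the keys are sorted, a time-window request touches only the pages covering that window instead of scanning the whole file.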
META Chunk
----------
Each data file contains a META chunk which holds information about the state of
the file. The META chunk is always at the end of the file at a fixed position.
Because CAPS supports pre-allocating file space to minimize disk
fragmentation, even without native file system support, the META chunk
contains information such as:
* effectively used bytes in the file (virtual file size)
* position of the index root node
* the number of records in the file
* the covered time span
and some other internal information.
.. _sec-optimization:
Optimization
============
After a plugin packet is received and before it is written to disk,
:term:`CAPS` tries to optimize the file data in order to reduce the overall
data size and to speed up access. This includes:
* **merging** data chunks for continuous data blocks
* **splitting** data chunks at the day boundary
* **trimming** overlapped data
Merging of Data Chunks
----------------------
:term:`CAPS` tries to create large contiguous blocks of data by reducing the
number of data chunks. The advantage of large chunks is that less disk space is
occupied by data chunk headers. Also, seeking to a particular time stamp is
faster because fewer data chunk headers need to be read.
Data chunks can be merged if the following conditions apply:
* merging is supported by packet type
* previous data header is compatible according to packet specification, e.g.
``samplingFrequency`` and ``dataType`` matches
* ``endTime`` of last record equals ``startTime`` of new record (no gap)
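These conditions can be sketched as a predicate. The record representation and field names are illustrative, and per the packet type descriptions later in this chapter only RAW supports merging:

```python
def can_merge(prev, new, mergeable_types=frozenset({"RAW"})):
    """Check whether a new record may be appended to the previous
    data chunk (sketch of the conditions listed above)."""
    return (
        prev["type"] in mergeable_types                    # type supports merging
        and prev["type"] == new["type"]
        and prev["sampling_freq"] == new["sampling_freq"]  # compatible header
        and prev["data_type"] == new["data_type"]
        and prev["end_time"] == new["start_time"]          # contiguous, no gap
    )

prev = {"type": "RAW", "sampling_freq": (100, 1), "data_type": 101,
        "end_time": 10.0}
new = {"type": "RAW", "sampling_freq": (100, 1), "data_type": 101,
       "start_time": 10.0}
print(can_merge(prev, new))  # True
```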
Figure :num:`fig-file-merge` shows the arrival of a new plugin packet. In
alternative A) the merge fails and a new data chunk is created. In alternative B)
the merge succeeds: the new data is appended to the existing
data block and the original chunk header is updated to reflect the new chunk
size.
.. _fig-file-merge:
.. figure:: media/file_merge.png
:width: 18cm
Merging of data chunks for seamless streams
Splitting of Data Chunks
------------------------
Figure :num:`fig-file-split` shows the arrival of a plugin packet containing
data of two different days. If possible, the data is split at the day boundary. The
first part is appended to the existing data file. For the second part a new day
file is created, containing a new header and data chunk. This approach ensures
that each sample is stored in the correct data file and thus speeds up access.
Splitting of data chunks is only supported for packet types providing the
``trim`` operation.
.. _fig-file-split:
.. figure:: media/file_split.png
:width: 18cm
Splitting of data chunks at the day boundary
Trimming of Overlaps
--------------------
The received plugin packets may contain overlapping time spans. If supported by
the packet type :term:`CAPS` will trim the data to create seamless data streams.
.. _sec-packet-types:
Packet Types
============
:term:`CAPS` currently supports the following packet types:
* **RAW** - generic time series data
* **ANY** - any possible content
* **MiniSeed** - native :term:`MiniSeed`
.. _sec-pt-raw:
RAW
---
The RAW format is a lightweight format for uncompressed time series data with a
minimal header. The chunk header is followed by a 16 byte data header:
============================ ========= =========
content type bytes
============================ ========= =========
dataType char 1
*startTime* TimeStamp [11]
|tab| year int16 2
|tab| yDay uint16 2
|tab| hour uint8 1
|tab| minute uint8 1
|tab| second uint8 1
|tab| usec int32 4
samplingFrequencyNumerator uint16 2
samplingFrequencyDenominator uint16 2
============================ ========= =========
The number of samples is calculated by dividing the remaining ``chunkSize`` by
the size of the ``dataType``. The following data type values are supported:
==== ====== =====
id type bytes
==== ====== =====
1 double 8
2 float 4
100 int64 8
101 int32 4
102 int16 2
103 int8 1
==== ====== =====
The RAW format supports the ``trim`` and ``merge`` operation.
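Combining the header layout and the type table, the sample count and record duration can be derived as follows. The sketch interprets the "remaining ``chunkSize``" as the chunk size minus the 16-byte data header:

```python
# Sizes in bytes of the supported data type ids (see table above)
DATA_TYPE_SIZE = {1: 8, 2: 4, 100: 8, 101: 4, 102: 2, 103: 1}

def raw_samples_and_duration(chunk_size, data_type, freq_num, freq_denom):
    """Derive sample count and record duration in seconds from the
    RAW header fields; the 16-byte data header is subtracted first."""
    n = (chunk_size - 16) // DATA_TYPE_SIZE[data_type]
    duration = n * freq_denom / freq_num
    return n, duration

# 416-byte chunk of int32 samples at 100/1 Hz -> 100 samples, 1 second
print(raw_samples_and_duration(416, 101, 100, 1))  # (100, 1.0)
```

The record ``endTime`` is then simply ``startTime`` plus the computed duration.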
.. _sec-pt-any:
ANY
---
The ANY format was developed to store any possible content in :term:`CAPS`. The chunk
header is followed by a 31 byte data header:
============================ ========= =========
content type bytes
============================ ========= =========
type char[4] 4
dataType (=103, unused) char 1
*startTime* TimeStamp [11]
|tab| year int16 2
|tab| yDay uint16 2
|tab| hour uint8 1
|tab| minute uint8 1
|tab| second uint8 1
|tab| usec int32 4
samplingFrequencyNumerator uint16 2
samplingFrequencyDenominator uint16 2
endTime TimeStamp 11
============================ ========= =========
The ANY data header extends the RAW data header by a 4 character ``type``
field. This field is intended to give a hint about the stored data. E.g. an image
from a web cam could be announced by the string ``JPEG``.
Since the ANY format removes the restriction to a particular data type, the
``endTime`` can no longer be derived from the ``startTime`` and
``samplingFrequency``. Consequently the ``endTime`` is explicitly specified in
the header.
Because the content of the ANY format is unspecified it neither supports the
``trim`` nor the ``merge`` operation.
.. _sec-pt-miniseed:
MiniSeed
--------
`MiniSeed <http://www.iris.edu/data/miniseed.htm>`_ is the standard for the
exchange of seismic time series. It uses a fixed record length and applies data
compression.
:term:`CAPS` adds no additional header to the :term:`MiniSeed` data. The
:term:`MiniSeed` record is directly stored after the 8-byte data chunk header.
All meta information needed by :term:`CAPS` is extracted from the
:term:`MiniSeed` header. The advantage of this native :term:`MiniSeed` support
is that existing plugin and client code may be reused. Also the transfer and
storage volume is minimized.
Because of the fixed record size requirement neither the ``trim`` nor the
``merge`` operation is supported.
.. TODO:
\subsection{Archive Tools}
\begin{itemize}
\item {\tt\textbf{riffsniff}} --
\item {\tt\textbf{rifftest}} --
\end{itemize}

.. _sec-changelog-caps2caps:
.. mdinclude:: CHANGELOG-caps.md

.. _sec-changelog-python:
.. mdinclude:: CHANGELOG-python.md

.. _sec-changelog-rs2caps:
.. mdinclude:: CHANGELOG-rs.md

.. _sec-changelog-server:
.. mdinclude:: CHANGELOG-server.md

.. _sec-changelog-slink2caps:
.. mdinclude:: CHANGELOG-seedlink.md

.. _sec-changelog-sproc:
.. mdinclude:: CHANGELOG-sproc.md

.. _sec-caps-config:
Execution and Automatic Startup
===============================
|appname| uses the
|scname| infrastructure for startup, configuration and logging. Please refer to
the |scname| `documentation <http://docs.gempa.de/seiscomp/current>`_ for a
comprehensive description of |scname|.
Figure :num:`fig-scconfig` shows a screenshot of ``scconfig``,
the central |scname| GUI for configuring, starting and monitoring the
|appname| server.
.. _fig-scconfig:
.. figure:: media/scconfig.png
:width: 18cm
:align: center
scconfig: |scname| utility allowing to configure, start and monitor :term:`CAPS`.
On the command line the following sequence may be used to enable, start and
monitor the |appname|:
.. code-block:: sh
seiscomp enable caps
seiscomp start caps
seiscomp check caps
Depending on the configured log level, :term:`CAPS` will log to
:file:`~/.seiscomp/log/caps`. For debugging purposes it is good practice to
stop the :term:`CAPS` background process and run it in the foreground using
the :option:`--debug` switch:
.. code-block:: sh
seiscomp stop caps
seiscomp exec caps --debug
File System Tuning
==================
Depending on the number of streams a :term:`CAPS` server handles, a number of
settings can improve the I/O throughput and overall performance. Since
channel data are organized in an archive structure where each stream is written
to a dedicated file, CAPS needs to open and close many files if thousands
of streams are fed into it. In the default configuration CAPS caches up to
250 open files for later reuse. An open file here is not only the data file
of the CAPS stream but may also include the index file if records have
been received out of order. So in the default configuration CAPS may need to
open 500 files at the same time.
Operating systems control the maximum number of open file descriptors a process
might hold. Often a default value is 1024. If the maximum open files in CAPS
should be increased to 2000 (assuming CAPS manages 2000 streams) then the
limit for the user who runs CAPS should be increased to at least 4000. In
many Linux distributions :program:`ulimit` can be used for that.
Furthermore CAPS requires file descriptors for incoming connections. Each
active connection holds a socket descriptor for network communication and
a file descriptor (or two if index files are present) for reading data.
Depending on the number of concurrent connections one is expecting, it would
be safe to add this number times three to the user limit in the operating
system.
Example for 2000 streams:
.. code-block:: properties
# The maximum number of open files managed by CAPS.
# 2000 + margin
AS.filebase.cache.openFileLimit = 2100
.. code-block:: sh
# Set ulimit to 7200 files: 2100 * 2 + 1000 * 3 (network)
$ ulimit -n 7200
.. _sec-caps-security:
Security and Access Control
===========================
.. _sec-conf-access:
Access control
--------------
:term:`CAPS` provides access control on the
:ref:`service<sec-conf-access-serv>` and :ref:`stream<sec-conf-access-stream>`
level. On the service level access can be granted by client IP, on the stream
level by client IP or user/group name obtained during
:ref:`authentication<sec-conf-access-auth>`. In
addition :ref:`read and write permission<sec-conf-access-passwd>` may be
granted for individual users and groups. The configuration is described in the
following sections.
.. _sec-conf-access-serv:
Service level access
~~~~~~~~~~~~~~~~~~~~
Service level access is defined in the main caps configuration file, e.g.
``@SYSTEMCONFIGDIR@/caps.cfg``
The following services are available:
* Plugin - Incoming data sent by :ref:`CAPS plugins<sec-caps-plugins>`,
configuration prefix: ``AS.plugin``
* Client - Default CAPS client protocol, e.g. used by the
:ref:`CAPS recordstream<sec-caps-recstream>` or by the :ref:`capstool`,
configuration prefix: ``AS``
* HTTP - :ref:`Administrative web interface<sec-caps-web-interface>` and
:ref:`FDSNWS dataselect service<sec-caps-fdsnws>`, configuration prefix:
``AS.http``
* WWS - :ref:`sec-caps-wws`, configuration prefix: ``AS.WWS``
For each service, access can be granted on the IP level through allow and deny
rule sets. By default no restrictions are in place. If an allow rule is present,
access is only granted to matching IPs. Deny rules may be used to exclude a
subset of the IP range defined in the allow set.
The formal definition of a rule is:
``IP_MASK[, IP_MASK[, ...]]``
where ``IP_MASK`` may be a single address or a subnet described by a network
mask.
Using the HTTP service as an example the configuration options
are ``AS.http.allow`` and ``AS.http.deny``.
Example:
.. code-block:: properties
AS.http.allow = 192.168.1.0/24
AS.http.deny = 192.168.1.42
These rules provide access to the HTTP service for all clients of the
192.168.1.0/24 subnet except for the IP 192.168.1.42.
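This allow/deny semantics can be sketched with Python's ``ipaddress`` module. The function and the embedded default rules mirror the HTTP example above and are purely illustrative:

```python
import ipaddress

def http_access_allowed(client_ip, allow=("192.168.1.0/24",),
                        deny=("192.168.1.42",)):
    """Sketch of service-level rule evaluation: with an allow list
    present, only matching IPs get access; deny entries then carve
    out exceptions."""
    ip = ipaddress.ip_address(client_ip)

    def in_list(rules):
        return any(ip in ipaddress.ip_network(r, strict=False)
                   for r in rules)

    if allow and not in_list(allow):
        return False
    return not in_list(deny)

print(http_access_allowed("192.168.1.7"))   # True
print(http_access_allowed("192.168.1.42"))  # False
print(http_access_allowed("10.0.0.1"))      # False
```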
.. _sec-conf-access-stream:
Stream level access
~~~~~~~~~~~~~~~~~~~
Stream level access is controlled by an access file defined by
``AS.auth.basic.access-list``.
Each line of the file consists of an ALLOW or DENY rule. The formal definition of
one rule is:
``STREAMID.ALLOW|DENY= IP_MASK|USER|%GROUP[, IP_MASK|USER|%GROUP[, ...]]``
where
* ``STREAMID`` is defined as: ``[NET[.STA[.LOC[.CHA]]]]``. Regular expressions
are not supported.
* ``USER`` is a user account defined in the :ref:`shadow<sec-conf-access-auth>`
file or the special id ``all``.
* ``GROUP`` is a user group definition from the :ref:`group<sec-conf-access-group>`
file. A ``%`` must be placed before the group name to distinguish it from
a user.
.. note::
For access control, two cases must be distinguished:
1. Client access without user name and password
All client sessions have guest permissions when no login credentials are provided. By default
data can be read and written. The guest account can be restricted by IP rules only. Keep in
mind that, for instance, the rule DENY=all has no effect here.
2. Client access with user name and password
In this case only user rules are evaluated and IP restrictions have no effect. In addition,
user rules do not apply to the guest user. As a consequence, DENY=all prohibits access for
all users except the guest user. If access should be denied for all users including guests,
the following rule must be used: DENY=all, 0.0.0.0/0.
By default access is unrestricted. If a stream ID is not matched by any access
rule then access will be granted. This behavior is different from the service
level access where an allow rule will implicitly revoke access to any non
matching IP.
To restrict access by default you may add a global DENY rule which references no
stream id and which matches all IP addresses and all users using the special
user id ``all``:
.. code-block:: properties
DENY = 0.0.0.0/0, all
The rules in the access file are evaluated independently of the order in which
they are defined. A rule with more stream id components overrules a more generic
line. E.g., considering a request from the local machine, the following rule set
would
* grant access to all networks except for AM
* grant access to station AM.R0000 except for the 00.ENN stream
.. code-block:: properties
AM.DENY = 127.0.0.1
AM.R0000.ALLOW = 127.0.0.1
AM.R0000.00.ENN.DENY = 127.0.0.1
The client IP is **only** evaluated in the absence of user authentication. E.g., the
following rule would block access to any anonymous user but still grant access
to any authenticated user:
.. code-block:: properties
DENY = 0.0.0.0/0
Please refer to :ref:`sec-conf-access-user-serv` for a definition of service
specific users.
The following example shows how anonymous access by IP and access by user name
may be combined:
.. code-block:: properties
AM.DENY = 0.0.0.0/0, all
AM.ALLOW = 127.0.0.1, %group1, user1
AM.R0000.ALLOW = user2
AM.R0000.DENY = user1
The example above
* grants access to anybody except for the AM network
* grants access to the AM network for
* anonymous users on the same machine
* users belonging to the ``group1`` group
* the user ``user1``
* in addition grants access to the station AM.R0000 to the user ``user2`` while
local anonymous users and authenticated users of the ``group1`` would still
have access
* explicitly denies access to station AM.R0000 for ``user1``
The stream level access can be tested and debugged on the command line by
specifying a stream and (optionally) an IP to test for:
.. code-block:: sh
$ caps -v --print-access AM.R0000.00.ENN 1.2.3.4
.. _sec-conf-access-auth:
Authentication by user name and password (shadow file)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Authentication can be used, e.g. together with the :ref:`capss RecordStream <sec-caps-recstream>`
or :ref:`capstool`.
It is performed against a shadow file defined by
``AS.auth.basic.users.shadow``. It contains the user name and password information
for the user accounts. Each line consists of a user name and a password hash
separated by a colon (``:``). The formal definition of one line is:
``USER:PWD_HASH``.
To generate a password hash, ``mkpasswd`` can be used. It is recommended to apply a
strong algorithm such as sha-256 or sha-512. The command
.. code-block:: sh
$ user=sysop pw=`mkpasswd -m sha-512` && echo $user:$pw
generates a password hash for user sysop.
An empty password is represented by an asterisk (``*``).
Example:
.. code-block:: properties
# The user name is equal to the password
user1:$6$mZM8gpmKdF9D$wqJo1HgGInLr1Tmk6kDrCCt1dY06Xr/luyQrlH0sXbXzSIVd63wglJqzX4nxHRTt/I6y9BjM5X4JJ.Tb7XY.d0
user2:$6$zE77VXo7CRLev9ly$F8kg.MC8eLz.DHR2IWREGrSwPyLaxObyfUgwpeJdQfasD8L/pBTgJhyGYtMjUR6IONL6E6lQN.2QLqZ5O5atO/
FDSNWS:*
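A parser for such a file is straightforward; the sketch below is illustrative and uses a placeholder hash:

```python
def load_shadow(text):
    """Parse shadow-style lines ``USER:PWD_HASH``; an asterisk marks
    an empty password."""
    users = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        user, _, pwhash = line.partition(":")
        users[user] = None if pwhash == "*" else pwhash
    return users

shadow = """\
# placeholder hash, not a real credential
user1:$6$salt$hash
FDSNWS:*
"""
accounts = load_shadow(shadow)
print(sorted(accounts))            # ['FDSNWS', 'user1']
print(accounts["FDSNWS"] is None)  # True
```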
.. _sec-conf-access-guest:
Guest user
~~~~~~~~~~
The CAPS server ships with a pre-configured anonymous user identified by
``guest``. It may be used during login at the
:ref:`web interface<sec-caps-web-interface>` in which case access is authorized
against the client IP.
The guest user may be assigned to a :ref:`user group <sec-conf-access-group>`
and its :ref:`access properties<sec-conf-access-passwd>` may be defined.
Anonymous access may be disabled through IP-based DENY rules in the
:ref:`access control<sec-conf-access-stream>` list file.
.. _sec-conf-access-user-serv:
Service-specific users
~~~~~~~~~~~~~~~~~~~~~~
For some services it might be desirable to disable the authentication entirely.
This can be achieved by adding one of the special service-specific users to the
:ref:`shadow file<sec-conf-access-auth>` followed by an asterisk indicating
an empty password. Optionally, :ref:`stream specific access<sec-conf-access>`
can be granted or revoked for this user as well. The following users are
available for the individual services:
* HTTP - Access to the :ref:`web interface<sec-caps-web-interface>`
* FDSNWS - Access to :ref:`sec-caps-fdsnws` dataselect service served through
the HTTP protocol (``/fdsnws/dataselect/1/query``)
* WWS - Access to the :ref:`sec-caps-wws` Protocol
.. _sec-conf-access-group:
Groups
~~~~~~
A group file, defined by ``AS.auth.basic.users.group``, allows assigning users
to groups. Each line of the file consists of a group name followed by a user
list. The formal definition of one rule is:
``GROUP: USER[, USER[, ...]]``
where
* ``GROUP`` is the name of the new group definition
* ``USER`` is a user account defined in the :ref:`shadow<sec-conf-access-auth>`
file or the special id ``guest``
Example:
.. code-block:: properties
group1: user1, user2
A group may be referenced by the
:ref:`access control<sec-conf-access-stream>` or
:ref:`sec-conf-access-passwd` file. In both cases a ``%`` prefix is required to
distinguish it from a user name.
.. _sec-conf-access-passwd:
Passwd: user access properties
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In addition to :ref:`authentication by user name and password<sec-conf-access-stream>`,
user access control properties can be set in a
passwd file defined by ``AS.auth.basic.users.passwd``. The formal definition of
a line is
``USER|%GROUP:PROP[, PROP[, ...]]``
where
* ``USER`` is a user account defined in the :ref:`shadow<sec-conf-access-auth>`
file or one of the special ids ``all`` or ``guest``.
* ``GROUP`` is a user group definition from the :ref:`group<sec-conf-access-group>`
file. A ``%`` must be placed before the group name to distinguish it from
a user.
* ``PROP`` is a property granted to the user or group. The following properties
are currently supported:
* read - Grants permission to request data from the server
* write - Grants permission to store data into the server
* admin - Grants permission to request server statistics and to view server
statistics on the :ref:`server website <sec-caps-web-interface>`.
By default read and write permissions are granted to the
:ref:`guest user<sec-conf-access-guest>` and all authenticated users not
listed in this file.
The following example changes this and revokes read and write permissions per
default. Read access is provided to anonymous and users belonging to the
``group1`` while write access is only granted to ``user1``.
.. code-block:: properties
all:
guest: read
%group1: read
user1: read,write
.. _sec-conf-ssl:
Secure sockets layer (SSL)
--------------------------
The Secure Sockets Layer (SSL) is a standard for establishing a secured
communication between applications using insecure networks. Neither client
requests nor server responses are readable by communication hubs in between. SSL
is based on a public-key infrastructure (PKI) to establish trust about the
identity of the communication counterpart. The concept of a PKI is based on
public certificates and private keys.
The following example illustrates how to generate a self-signed certificate
using the OpenSSL library:
.. code-block:: sh
$ openssl req -new -x509 -sha512 -newkey rsa:4096 -out caps.crt -keyout caps.key -nodes
The last parameter ``-nodes`` disables the password protection of the private
key. If omitted, a password must be defined which will be requested when
accessing the private key. :term:`CAPS` will request the password on the command
line during startup.
To enable SSL in :term:`CAPS` the ``AS.SSL.port`` as well as the location of the
``AS.SSL.certificate`` and ``AS.SSL.key`` file must be specified.
Optionally the unencrypted ``AS.port`` may be deactivated by setting a value
of ``-1``.
.. include:: /apps/caps.rst
:start-line: 10

.. _sec-caps-retrieval:
Access Data on a CAPS Server
============================
A range of tools is available to access data and information on a CAPS server.
.. csv-table::
:header: "Name", "SW Package", "Description"
:widths: 15,15,70
":ref:`capstool <sec-caps-capstool>`","caps-tools","A command-line tool for retrieving data and meta information from a CAPS server"
":ref:`rifftool <sec-caps-file>`","caps-tools","A command-line tool for data inspection and extraction from individual CAPS data files (RIFF), e.g., in a CAPS archive"
":ref:`capssds <sec-caps-file>`","caps-tools","A virtual overlay file system presenting a CAPS archive directory as a read-only SDS archive with no extra disk space requirement."
":ref:`caps_plugin <sec-caps-seedlink>`","seiscomp","The plugin fetches miniSEED and :ref:`RAW <sec-pt-raw>` data from a CAPS server and provides the data to :program:`seedlink`"
":ref:`caps / capss<sec-caps-recstream>`","seiscomp","The RecordStream implementations for |appname|"
":ref:`cap2caps <sec-caps2caps>`","caps-plugins","Automatic or interactive synchronization of two CAPS servers"
":ref:`web interface <sec-caps-web>`","caps-server","The web interface provided by the CAPS server"
":ref:`FDSNWS <sec-caps-fdsnws>`","caps-server","FDSNWS dataselect interface provided by the CAPS server"
":ref:`WWS <sec-caps-wws>`","caps-server","Winston Waveform Server interface provided by the CAPS server"
":ref:`scardac <sec-caps-dataavailability>`","seiscomp","A command-line tool for generating availability information from CAPS archive"
.. _sec-caps-recstream:
RecordStream: caps/capss
------------------------
|scname| applications access waveform data through the
:term:`RecordStream` interface.
To fetch data from a CAPS server specific RecordStream implementations may be used:
* *caps*: regular RecordStream implementation to access the CAPS server,
* *capss*: RecordStream implementation to access the CAPS server secured by SSL,
user name and password. Similar to *https*, *capss* will establish a Secure Socket
Layer (SSL) communication.
.. _sec-caps-rs-config:
Configuration
~~~~~~~~~~~~~
In order to make use of the *caps* or the *capss* RecordStream configure the
RecordStream URL in :confval:`recordstream`. Let it point to the CAPS server
providing the data. Examples for *caps* and *capss*:
.. code-block:: properties
recordstream = caps://server:18002
recordstream = capss://foo:bar@server:18022
:ref:`Optional parameters <sec-caps-opt-params>` are available for
*caps*/*capss*.
.. note::
While the *caps*/*capss* :term:`RecordStream` provides data in real time
and from archive, some modules, e.g., :cite:t:`scart`, :cite:t:`fdsnws` or
:cite:t:`gis` should be strictly limited to reading from archive only by
the option ``arch``:
.. code-block:: properties
recordstream = caps://server:18002?arch
recordstream = capss://foo:bar@server:18022?arch
Otherwise requests attempting to fetch missing data may hang forever.
.. _sec-caps-opt-params:
Optional Parameters
~~~~~~~~~~~~~~~~~~~
Optional RecordStream parameters which can be combined:
- ``arch`` - read from CAPS archive only,
- ``ooo`` - out of order, data are fetched and provided in the order of their arrival in the CAPS server, useful for analysing if data have arrived out of order,
- ``pass`` - password if server requires authentication,
- ``request-file`` - file specifying the streams to be requested. One stream per line. Per line: net sta loc stream startTime endTime,
- ``timeout`` - timeout of acquisition in seconds. Data acquisition will be restarted,
- ``user`` - user name if server requires authentication.
.. csv-table::
:header: "URL", "Description"
"caps://server:18002?arch","Read data from CAPS archive. Stop data acquisition when all available waveforms are fetched."
"caps://server:18002?ooo","Fetch data in the original order of arrival."
"caps://server:18002?request-file=request.txt","Request only streams in time intervals given in request.txt"
"caps://server:18002?timeout=5","Apply a timeout of 5 seconds."
"capss://server:18022?user=foo&pass=bar", "Use secure protocol (SSL) with user
name and password. Read the section on :ref:`sec-conf-access-auth` for details
on the generation of user names and passwords."
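A request file passed via ``request-file`` might look like the following; the stream IDs and the exact time stamp format are illustrative:

```text
II BFO 00 BHZ 2021-04-01T00:00:00 2021-04-02T00:00:00
II BFO 00 BHN 2021-04-01T00:00:00 2021-04-02T00:00:00
```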
Combination with other RecordStream interfaces
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The caps and the capss RecordStream may be combined with other
:term:`RecordStream` interfaces.
Examples:
#. Decimation
Use the decimation :term:`RecordStream` :cite:p:`recordstream`
interface to fetch data from a CAPS server running on localhost decimated to
1 sample per second.
*global configuration:*
.. code-block:: sh
recordstream = dec://caps/localhost:18002?rate=1
*command line parameter:*
.. code-block:: sh
-I dec://caps/localhost:18002?rate=1
#. Resample
Use the resample :term:`RecordStream` :cite:p:`recordstream`
interface to fetch data from a CAPS server running on localhost resampled to
16 samples per second.
*global configuration:*
.. code-block:: sh
recordstream = resample://caps/localhost:18002?rate=16
*command line parameter:*
.. code-block:: sh
-I resample://caps/localhost:18002?rate=16
.. _sec-caps-capstool:
CAPS command-line interface (CLI) client: capstool
--------------------------------------------------
:ref:`capstool` is a CAPS client application for retrieving data and listing
available streams from an operational CAPS server. The CAPS server may run
locally or remotely as all communication is performed over the network
(:option:`-H`).
Data requests are based on time windows and stream IDs. The data is provided in
its original format on stdout or, with :option:`-o`, as a single file. In
particular, capstool may be used to fetch miniSEED data and create miniSEED
files.
:ref:`capstool` can also be used for :ref:`testing the server
<sec-caps-server-testing>` as it provides information on available streams with
their time window (:option:`-Q`, :option:`-I`).
.. _sec-caps-file:
Data file access: rifftool/capssds
----------------------------------
The data files in the CAPS archive contain a small additional header describing
the data format and implementing an index for fast and in-order data retrieval.
Read the :ref:`format documentation <sec-packet-types>` for more details. In
contrast to miniSEED files in :term:`SDS` archives created, e.g., by
:cite:t:`slarchive` or :cite:t:`scart`, the original miniSEED files stored in
the CAPS archive cannot be directly read by common seismological applications.
You may access data files directly:
* Test and retrieve data files using :ref:`rifftool`. rifftool addresses
individual files directly and does not require the CAPS server to be running.
* Run :ref:`capssds` to create a virtual overlay file system presenting a CAPS
archive directory as a read-only :term:`SDS` archive with no extra disk space
requirement. The CAPS archive directory and file names are mapped. An
application reading from a file will only see :term:`miniSEED` records ordered
  by record start time. You may connect to the virtual SDS archive using the
  SDS RecordStream or directly read the individual :term:`miniSEED` files.
  Other seismological software such as ObsPy or Seisan may read directly from
  the SDS archive or the files therein.
.. _sec-caps2caps:
Synchronize with another CAPS server: caps2caps
-----------------------------------------------
Use :ref:`caps2caps` to synchronize your CAPS server with another one. You may push
or pull data on either side. In contrast to the generation of regular :term:`SDS`
archives, e.g., by :program:`scart`, the CAPS server will not generate duplicate
data records if executing :ref:`caps2caps` multiple times. While synchronizing,
observe the :ref:`web interface <sec-caps-web-interface>` for statistics on
received, written or rejected data packets.
.. _sec-caps-seedlink:
Connect from SeedLink: caps_plugin
----------------------------------
The :ref:`caps_plugin` plugin fetches data from a CAPS server and provides the
data to :program:`seedlink`. The plugin can be configured and started like any
other plugin for :program:`seedlink` or executed on demand. Select *caps* when
choosing the plugin in the seedlink binding configuration.
**Examples:**
* Fetch data from a remote CAPS server and make them available in your :program:`seedlink` instance:
#. configure the :ref:`caps_plugin` plugin in the bindings configuration of your
:program:`seedlink` instance pointing to the remote CAPS server
#. enable and start :program:`seedlink`
.. code-block:: sh
seiscomp enable seedlink
seiscomp start seedlink
* Provide data from a CAPS server by seedlink on the same machine to external clients:
#. create another seedlink instance, e.g., :program:`seedlinkP`:
.. code-block:: sh
seiscomp alias create seedlinkP seedlink
#. configure the :ref:`caps_plugin` in the bindings configuration of :program:`seedlinkP`
pointing to the local CAPS server.
#. enable and start :program:`seedlinkP`:
.. code-block:: sh
seiscomp enable seedlinkP
seiscomp start seedlinkP
.. _sec-caps-web:
Web interface
-------------
The CAPS server ships with a :ref:`web interface <sec-caps-web-interface>`.
Besides allowing you to view server statistics, data stored on the server can
be downloaded for any time window if the original format is :term:`miniSEED`.
For downloading miniSEED data:
#. Select the stream(s) of interest.
#. Zoom in to the period of interest. Zooming in and out in time works by
right-mouse button actions just like in other |scname| GUI applications like
:cite:t:`scrttv`.
#. Click on the download button to download the miniSEED file. An error message
will be printed in case the original format is not miniSEED.
.. _fig-web-streams-download:
.. figure:: media/web_streams_download.png
:width: 18cm
   Stream perspective of the :term:`CAPS` web interface allowing you to
   download miniSEED data for selected streams.
.. _sec-caps-fdsnws:
Built-in FDSNWS
---------------
|appname| natively speaks FDSN Web Services (FDSNWS, :cite:p:`fdsn`), providing
waveform data via the dataselect service. Information on events and stations is
not delivered. The waveform data is delivered through the port configured in
:confval:`AS.http.port` or through the port configured by your Apache server,
if available. Contact your system administrator for information on the Apache
server. Read the documentation of the :ref:`CAPS server <sec-caps-config>` for
the configuration.
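For illustration, a standard FDSNWS dataselect query URL can be composed as in
the following sketch. The host name and port number are assumptions for this
example; use the value configured in :confval:`AS.http.port` or the port of
your Apache setup.

```python
from urllib.parse import urlencode

def dataselect_url(host, port, net, sta, loc, cha, start, end):
    """Build a standard FDSNWS dataselect query URL.

    An empty location code is requested as '--' following FDSNWS
    convention.
    """
    params = urlencode({
        "net": net, "sta": sta,
        "loc": loc if loc else "--", "cha": cha,
        "starttime": start, "endtime": end,
    })
    return f"http://{host}:{port}/fdsnws/dataselect/1/query?{params}"

# Hypothetical host/port and stream; adjust to your server setup.
url = dataselect_url("localhost", 18003, "GE", "APE", "", "BHZ",
                     "2023-05-01T12:00:00", "2023-05-01T13:00:00")
print(url)
```

The returned miniSEED could then be fetched with any HTTP client, e.g.,
``curl -o data.mseed "<url>"``.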
.. _sec-caps-wws:
Built-in Winston waveform server
--------------------------------
|appname| natively speaks the Winston Waveform Server protocol (WWS,
:cite:p:`wws`), e.g., serving :cite:t:`swarm` by USGS. Read the documentation
of the :ref:`CAPS server <sec-caps-config>` for the configuration.
.. _sec-caps-dataavailability:
Data availability information
-----------------------------
Web interface
~~~~~~~~~~~~~
The :ref:`Channels perspective of the CAPS web interface <sec-caps-web>`
indicates periods of availability and gaps at the level of network, station,
sensor location and channel. The resolution of colors and percentages is linked
to the granularity of the availability detection, which increases with shorter
time windows in order to optimize the speed of the calculation.
capstool
~~~~~~~~
The CAPS server stores information on received data segments
including their start and end times. Information on resulting gaps can be
retrieved by :ref:`capstool`. Example:
.. code-block:: sh
echo "2023,05,01,12,00,00 2023,05,03,00,00,00 NET * * *" | capstool -G --tolerance=0.5 -H localhost
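The request line piped to capstool above follows a simple pattern of two
comma-separated times followed by the stream codes. As a sketch, such a line
could be generated programmatically (a hypothetical helper, not part of the
CAPS package):

```python
from datetime import datetime

def capstool_request(start, end, net="*", sta="*", loc="*", cha="*"):
    """Format a capstool request line: two comma-separated times
    followed by network, station, location and channel codes."""
    fmt = "%Y,%m,%d,%H,%M,%S"
    return f"{start.strftime(fmt)} {end.strftime(fmt)} {net} {sta} {loc} {cha}"

line = capstool_request(datetime(2023, 5, 1, 12, 0, 0),
                        datetime(2023, 5, 3, 0, 0, 0), net="NET")
print(line)  # 2023,05,01,12,00,00 2023,05,03,00,00,00 NET * * *
```

The resulting line can then be echoed and piped to capstool as shown above.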
scardac
~~~~~~~
The availability of data in the CAPS archive can be analyzed and written to the
|scname| database by the |scname| module :cite:t:`scardac`. For the
availability analysis, add the plugin *daccaps* to the list of
:confval:`plugins` and the URL of the CAPS archive to the archive configuration
of :cite:t:`scardac`. The *daccaps* plugin ships with the gempa package
*caps-server*.
Example configuration of :cite:t:`scardac` (:file:`scardac.cfg`):
.. code-block:: properties
plugins = ${plugins}, daccaps
archive = caps:///home/data/archive/caps/
Example call:
.. code-block:: sh
scardac --plugins="daccaps, dbmysql" -d localhost -a caps:///home/data/archive/caps/ --debug
.. note::
   As of SeisComP version 6, scardac has received significant optimizations.
   Instead of scanning the full archive, only files which have changed since
   the last scan are examined. This means that when scanning the entire
   archive, the first run may be more time consuming than subsequent ones
   executed within reasonable intervals.
The data availability information can be retrieved from the database, e.g.,
using :cite:t:`fdsnws` or :cite:t:`scxmldump`.
.. _sec-caps-examples:
Examples and Recipes
====================
Retrieve real-time data from a CAPS server
------------------------------------------
The :ref:`listed plugins<sec-caps-plugins>` can be used for exchanging real-time data.
.. _sec-caps_data:
CAPS server to data processing modules
......................................
Use this recipe to:
- Provide data from a CAPS server to data processing modules.
Recipe:
1. Configure the CAPS server in the module configuration.
#. Start the CAPS server.
#. For data processing use the :ref:`caps or capss RecordStream <sec-caps-recstream>`
and configure it in the global module configuration:
.. code-block:: sh
recordstream = caps://localhost:18002
CAPS server to SeedLink clients
...............................
Use this recipe to:
* Provide data from a CAPS server to external :cite:t:`seedlink` clients.
Recipe:
#. Configure and start the CAPS server to provide the data.
#. Configure a new SeedLink instance:

   **Case 1 - CAPS uses SeedLink plugins for data collection:**

   Configure a SeedLink instance on a second computer which will act as a
   SeedLink client and server.

   **Case 2 - CAPS does not use SeedLink plugins for data collection:**

   Generate an alias for seedlink on the same computer which will act as a
   SeedLink client and server.
#. Use the plugin *caps* in the SeedLink bindings and
configure the plugin to connect to the CAPS server,
#. Configure the new SeedLink instance,
#. Update configuration of the new SeedLink instance (no module selection),
#. Start the new SeedLink instance.
Import data into a CAPS server
------------------------------
.. _sec-caps_slink:
Real-time import with seedlink plugins
......................................
Use this recipe to:
* Fetch data from a SeedLink server or from other sources using
standard SeedLink plugins :cite:p:`seedlink` of |scname| and provide them to a
CAPS server.
Recipe:
#. Configure and start the :ref:`CAPS server <sec-caps-server>` to receive the data,
#. Choose and configure the seedlink plugin in the SeedLink bindings configuration,
#. Uncheck the parameter *loadTimeTable* in the :cite:t:`seedlink` module
configuration.
.. code-block:: sh
plugins.chain.loadTimeTable = false
#. Update the configuration.
#. Enable and start :ref:`slink2caps`.
Real-time import with CAPS plugins
..................................
Use this recipe to:
* Fetch data from external sources using the CAPS-specific
  :ref:`CAPS plugins <sec-caps-acqui-plugins>` and provide them to a CAPS server.
Recipe:
#. Configure and start the :ref:`CAPS server <sec-caps-server>` to receive the data,
#. Choose and configure the :ref:`CAPS plugin <sec-caps-acqui-plugins>` in the
module configuration,
#. Enable and start the plugin.
.. _sec-caps_example_offline:
Import offline data: miniSEED and other formats
...............................................
Use this recipe to:
* Populate a CAPS server with offline miniSEED or other types of data.
Recipe:
Besides real-time data exchange, data from offline experiments, offline
stations or from other offline data archives can be fed into the CAPS data
archive from where they are made available by the CAPS server.
For example, a set of miniSEED data files (:file:`*.mseed`) can be pushed into
the CAPS archive using :ref:`rs2caps` and the :term:`RecordStream` interface
"file" (``-I file://``; *file* is the default and can be omitted) along with
the :ref:`CAPS server <sec-caps-server>`.
#. Input one file (:file:`file.mseed`), accept all streams:
.. code-block:: sh
seiscomp start caps
rs2caps -I file.mseed --passthrough
#. Input all files ending with .mseed, accept all streams:
.. code-block:: sh
seiscomp start caps
cat *.mseed | rs2caps -I - --passthrough
#. Input all files ending with .mseed, accept only streams found in the database:
.. code-block:: sh
seiscomp start caps
cat *.mseed | rs2caps -I - -d mysql://sysop:sysop@localhost/seiscomp -j ""
Real-time playbacks
-------------------
Use this recipe to:
* Play back sorted miniSEED data as in real time using :cite:t:`msrtsimul`.
Real-time playbacks can be realized using
* A combination of msrtsimul and the CAPS plugin :ref:`rs2caps` or
* :cite:t:`seedlink`.
When using rs2caps, the data may or may not be stored in the CAPS archive.
When using seedlink, the data are kept in the seedlink buffer and
:cite:t:`slarchive` can be used to store the data in an SDS archive.
.. note::
For playbacks, the input data must be **sorted by end time**.
   Real-time playbacks create **events with fake times**, e.g., creationTime,
   eventTime. Therefore, they should be executed on production systems only in
   exceptional cases, e.g., for whole-system validation. Better use dedicated
   SeisComP machines. Starting msrtsimul with the option *-m historic*
   preserves the time of the data records and thus the pick times. Instead,
   using **offline playbacks based on XML files** may be the faster and better
   option to create parameters from historic events.
Procedure using CAPS / rs2caps
..............................
#. Retrieve miniSEED data from CAPS archive using :ref:`capstool<capstool>` or
other methods.
#. Sort miniSEED records by end time using :ref:`scmssort`:
.. code-block:: sh
scmssort -E miniSEED_file > miniSEED_file_sorted
#. Stop :ref:`caps`, :ref:`slink2caps`, :ref:`rs2caps` and all other active
   data acquisition modules. This stops the real-time data acquisition.
#. Execute caps on the command line without archiving the data:
.. code-block:: sh
caps --read-only
.. warning::
   As the data are not archived, processing of the playback data is impossible
   after stopping caps. Only on dedicated playback systems should caps be used
   normally, i.e. without any additional option.
#. Playback the sorted miniSEED data using msrtsimul:
.. code-block:: sh
msrtsimul -v -c miniSEED_file_sorted | rs2caps -I - --passthrough
The option ``--passthrough`` ensures that all data are passed to caps.
#. Stop caps after the playback and the evaluation are finished.
#. Start caps and all other real-time data acquisition modules.
Procedure using seedlink
........................
1. Retrieve miniSEED data from CAPS archive using :ref:`capstool<capstool>`.
#. Sort miniSEED records by end time using :ref:`scmssort`:
.. code-block:: sh
scmssort -E miniSEED_file > miniSEED_file_sorted
#. Enable msrtsimul and loadTimeTable in the seedlink configuration:
.. code-block:: sh
msrtsimul = true
plugins.chain.loadTimeTable = true
#. Configure the :term:`RecordStream` with seedlink:
.. code-block:: sh
recordstream = slink://localhost:18000
#. Start seedlink and restart the modules that use the RecordStream interface:
.. code-block:: sh
seiscomp update-config
seiscomp start seedlink
seiscomp restart scautopick scamp
#. Playback the sorted miniSEED data using msrtsimul:
.. code-block:: sh
msrtsimul -v miniSEED_file_sorted
#. Revert all changes after the playback.
.. _gradients:
***************
Color Gradients
***************
Overview
========
A number of pre-defined color gradients are available for gempa modules.
These gradients can be used when displaying color-coded values such as grids,
spectrograms or heat-maps. The following is an overview of pre-defined
gradients along with the name of the gradient which can be used in
configurations. The gradients are grouped into:
* :ref:`gradients-fullspectrum`
* :ref:`gradients-darkyellow`
* :ref:`gradients-blackandwhite`
* :ref:`gradients-singlecolor`
* :ref:`gradients-threecolor`
List of Gradients
=================
.. _gradients-fullspectrum:
Full spectrum
-------------
Gradients that use the full spectrum of colors.
.. _gradients-default:
Default
~~~~~~~
* Configuration name: :option:`Default`
.. figure:: media/gradients/Default.*
:width: 420 px
:alt: overview of the Default color-gradient
.. figure:: media/gradients/Default_.*
:width: 420 px
:alt: example spectrogram using the Default gradient
Example spectrogram using the "Default" color-gradient.
.. _gradients-spectrum:
Spectrum
~~~~~~~~
* Configuration name: :option:`Spectrum`
.. figure:: media/gradients/Spectrum.*
:width: 420 px
:alt: overview of the Spectrum color-gradient
.. figure:: media/gradients/Spectrum_.*
:width: 420 px
:alt: example spectrogram using the Spectrum gradient
Example spectrogram using the "Spectrum" color-gradient.
.. _gradients-circle:
Circle
~~~~~~
* Configuration name: :option:`Circle`
.. figure:: media/gradients/Circle.*
:width: 420 px
:alt: overview of the Circle color-gradient
.. figure:: media/gradients/Circle_.*
:width: 420 px
:alt: example spectrogram using the Circle gradient
Example spectrogram using the "Circle" color-gradient.
.. _gradients-darkyellow:
Dark to yellow
--------------
Gradients optically resembling fire, with increasing energy being displayed by
bright yellow tones.
.. _gradients-blackbody:
BlackBody
~~~~~~~~~
* Configuration name: :option:`BlackBody`
.. figure:: media/gradients/BlackBody.*
:width: 420 px
:alt: overview of the BlackBody color-gradient
.. figure:: media/gradients/BlackBody_.*
:width: 420 px
:alt: example spectrogram using the BlackBody gradient
Example spectrogram using the "BlackBody" color-gradient.
.. _gradients-inferno:
Inferno
~~~~~~~
* Configuration name: :option:`Inferno`
.. figure:: media/gradients/Inferno.*
:width: 420 px
:alt: overview of the Inferno color-gradient
.. figure:: media/gradients/Inferno_.*
:width: 420 px
:alt: example spectrogram using the Inferno gradient
Example spectrogram using the "Inferno" color-gradient.
.. _gradients-plasma:
Plasma
~~~~~~
* Configuration name: :option:`Plasma`
.. figure:: media/gradients/Plasma.*
:width: 420 px
:alt: overview of the Plasma color-gradient
.. figure:: media/gradients/Plasma_.*
:width: 420 px
:alt: example spectrogram using the Plasma gradient
Example spectrogram using the "Plasma" color-gradient.
.. _gradients-blackandwhite:
Black and white
---------------
For maximum contrast, these two gradients use only black and white.
.. _gradients-blackwhite:
BlackWhite
~~~~~~~~~~
* Configuration name: :option:`BlackWhite`
.. figure:: media/gradients/BlackWhite.*
:width: 420 px
:alt: overview of the BlackWhite color-gradient
.. figure:: media/gradients/BlackWhite_.*
:width: 420 px
:alt: example spectrogram using the BlackWhite gradient
Example spectrogram using the "BlackWhite" color-gradient.
.. _gradients-whiteblack:
WhiteBlack
~~~~~~~~~~
* Configuration name: :option:`WhiteBlack`
.. figure:: media/gradients/WhiteBlack.*
:width: 420 px
:alt: overview of the WhiteBlack color-gradient
.. figure:: media/gradients/WhiteBlack_.*
:width: 420 px
:alt: example spectrogram using the WhiteBlack gradient
Example spectrogram using the "WhiteBlack" color-gradient.
.. _gradients-singlecolor:
Single color
------------
The single color gradients share the benefit of the
:ref:`black and white gradients <gradients-blackandwhite>` of offering a high
contrast.
.. _gradients-bluepurple:
BluePurple
~~~~~~~~~~
* Configuration name: :option:`BuPu`
.. figure:: media/gradients/BluePurple.*
:width: 420 px
:alt: overview of the BluePurple color-gradient
.. figure:: media/gradients/BluePurple_.*
:width: 420 px
:alt: example spectrogram using the BluePurple gradient
Example spectrogram using the "BluePurple" color-gradient.
.. _gradients-blues:
Blues
~~~~~
* Configuration name: :option:`Blues`
.. figure:: media/gradients/Blues.*
:width: 420 px
:alt: overview of the Blues color-gradient
.. figure:: media/gradients/Blues_.*
:width: 420 px
:alt: example spectrogram using the Blues gradient
Example spectrogram using the "Blues" color-gradient.
.. _gradients-purplered:
PurpleRed
~~~~~~~~~
* Configuration name: :option:`PuRd`
.. figure:: media/gradients/PurpleRed.*
:width: 420 px
:alt: overview of the PurpleRed color-gradient
.. figure:: media/gradients/PurpleRed_.*
:width: 420 px
:alt: example spectrogram using the PurpleRed gradient
Example spectrogram using the "PurpleRed" color-gradient.
.. _gradients-threecolor:
Three color
-----------
Gradients allowing more details to be distinguished than the
:ref:`single color gradients <gradients-singlecolor>`.
.. _gradients-redyellowblue:
RedYellowBlue
~~~~~~~~~~~~~
* Configuration name: :option:`RdYlBu`
.. figure:: media/gradients/RedYellowBlue.*
:width: 420 px
:alt: overview of the RedYellowBlue color-gradient
.. figure:: media/gradients/RedYellowBlue_.*
:width: 420 px
:alt: example spectrogram using the RedYellowBlue gradient
Example spectrogram using the "RedYellowBlue" color-gradient.
.. _gradients-parula:
Parula
~~~~~~
* Configuration name: :option:`Parula`
.. figure:: media/gradients/Parula.*
:width: 420 px
:alt: overview of the Parula color-gradient
.. figure:: media/gradients/Parula_.*
:width: 420 px
:alt: example spectrogram using the Parula gradient
Example spectrogram using the "Parula" color-gradient.
.. _gradients-viridis:
Viridis
~~~~~~~
* Configuration name: :option:`Viridis`
.. figure:: media/gradients/Viridis.*
:width: 420 px
:alt: overview of the Viridis color-gradient
.. figure:: media/gradients/Viridis_.*
:width: 420 px
:alt: example spectrogram using the Viridis gradient
Example spectrogram using the "Viridis" color-gradient.
.. _sec-caps-interface:
Server Interfaces
=================
.. TODO
Plugin Interface
----------------
buffer, acknowledgement of packet
.. _sec-caps-client-interface:
Client interface / telnet
-------------------------
:term:`CAPS` provides a line-based client interface for requesting data and
listing available streams. The ``telnet`` command may be used to connect to the
server:
.. Using telnet application to connect to a local :term:`CAPS` server
.. code-block:: sh
telnet localhost 18002
The following commands are supported by the server:
* ``HELLO`` - prints server name and version
* ``HELP`` prints descriptive help
* ``BYE`` - disconnects from server
* ``AUTH <user> <password>`` - performs authentication
* ``INFO STREAMS [stream id filter]`` - lists available streams and time spans,
  see section :ref:`sec-ci-info-streams`
* ``BEGIN REQUEST`` - starts a request block, followed by request parameters,
  see section :ref:`sec-ci-request-data`
- ``REALTIME ON|OFF`` - enables/disables real-time mode, if disabled the
connection is closed if all archive data was sent
- ``STREAM ADD|REMOVE <NET.STA.LOC.CHA>`` - adds/removes a stream from the
request, may be repeated in one request block
- ``TIME [<starttime>]:[endtime]`` - defines start and end time of the
request, open boundaries are allowed
- ``HELI ADD|REMOVE <NET.STA.LOC.CHA>`` - similar to STREAM but re-sample
the data to 1Hz
- ``META{@[version]} [ON|OFF]`` - request delivery of packet header information only
- ``OUTOFORDER [ON|OFF]`` - delivers data in order of sampling or transmission time
- ``SEGMENTS [ON|OFF]`` - request delivery of times of contiguous segments (API >= 4)
- ``GAPS [ON|OFF]`` - request delivery of times of gaps (API >= 4)
- ``TOLERANCE [value in us]`` - Define data segment continuation tolerance in microseconds (API >= 4)
- ``RESOLUTION [value in d]`` - Resolution in days of the returned data segments or gaps (API >= 4)
* ``END`` - finalizes a request and starts acquisition
* ``PRINT REQUESTS`` - prints active request of current session
* ``ABORT`` - aborts current data transmission
Requests to the server are separated by a newline. For the response data, the
server prepends the message length to the data. In this way, non-ASCII
characters or binary content can be returned.
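As a sketch, a client could compose a request block from the commands listed
above like this (hypothetical helper code, not part of the CAPS distribution):

```python
def build_request(streams, start="", end="", realtime=False):
    """Compose a CAPS request block from stream IDs (NET.STA.LOC.CHA)
    and an optional time window; open boundaries are allowed by
    leaving start or end empty."""
    lines = ["BEGIN REQUEST",
             "REALTIME " + ("ON" if realtime else "OFF")]
    # One STREAM ADD line per requested stream
    lines += [f"STREAM ADD {s}" for s in streams]
    if start or end:
        lines.append(f"TIME {start}:{end}")
    lines.append("END")
    return "\n".join(lines) + "\n"

req = build_request(["VZ.HILO.WLS.SSH"], start="2013,08,02,09,00,02")
print(req)
```

The resulting text matches the request block shown in the data request example
and could be sent over a plain TCP socket to the server port.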
.. _sec-ci-info-streams:
List available streams
~~~~~~~~~~~~~~~~~~~~~~
The following example shows a telnet conversation of a request for available
streams. The first line contains the request command. All other lines represent
the server response. The response is 124 characters long. The length parameter
is interpreted by telnet and converted to its ASCII representation, in this
case: ``|``.
.. code-block:: none
:linenos:
:emphasize-lines: 1
INFO STREAMS VZ.HILO.*
|STREAMS
VZ.HILO.WLS.CAM 2013-07-26T00:00:01 2013-08-02T09:28:17
VZ.HILO.WLS.SSH 2008-06-06T00:00:00 2013-08-02T09:04:00
END
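As an illustrative sketch (not part of the CAPS tools), the stream lines of
such a response could be parsed like this:

```python
from datetime import datetime

def parse_streams(text):
    """Parse INFO STREAMS response lines of the form
    'NET.STA.LOC.CHA <start> <end>' into (id, start, end) tuples."""
    result = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 3:
            continue  # skip the 'STREAMS' and 'END' marker lines
        sid, start, end = parts
        result.append((sid,
                       datetime.fromisoformat(start),
                       datetime.fromisoformat(end)))
    return result

response = """STREAMS
VZ.HILO.WLS.CAM 2013-07-26T00:00:01 2013-08-02T09:28:17
VZ.HILO.WLS.SSH 2008-06-06T00:00:00 2013-08-02T09:04:00
END"""
for sid, start, end in parse_streams(response):
    print(sid, start.isoformat(), end.isoformat())
```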
.. _sec-ci-request-data:
Data requests
~~~~~~~~~~~~~
Data requests are initiated by a request block which defines the streams and
the time span to fetch.
.. code-block:: none
:linenos:
:emphasize-lines: 1-5
BEGIN REQUEST
REALTIME OFF
STREAM ADD VZ.HILO.WLS.SSH
TIME 2013,08,02,09,00,02:
END
DSTATUS OK
SESSION_TABLE VERSION:1
PACKET_HEADER IDSIZE:2,DATASIZE:4
FREQUESTS
ID:1,SID:VZ.HILO.WLS.SSH,SFREQ:1/60,UOM:mm,FMT:RAW/FLOAT
END
[unprintable data]
'REQUESTS
ID:-1,SID:VZ.HILO.WLS.SSH
END
<20>EOD
The listing above shows such a request block in lines 1-5. Line 2 disables the
real-time mode, so the session is closed after all data have been read. Line 3
adds the stream to the request set. More streams may be added in successive
lines. Line 4 specifies a start time and an open end time.

The first response chunk starts at line 6 and ends at line 11. It has a length
of 68 bytes (= ASCII ``D``) and contains version information and a session
table. The table maps a 2-byte integer id to data stream meta information. In
this way, subsequent data chunks can be identified by only 2 bytes and the
header information has to be transmitted only once.
Line 12 contains the data chunks. It is omitted here because it contains
unprintable characters. A data chunk starts with the 2-byte id followed by the
4-byte chunk size.
After all data have been transmitted, the server reports the end of the stream
(lines 13-15) and the end of the session (line 16).
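Following the framing described above, a data chunk can be decoded as in this
sketch; the 2-byte id and 4-byte size fields are taken from the text, while the
little-endian byte order is an assumption for illustration only:

```python
import struct

def read_chunk(buf, offset=0):
    """Decode one data chunk: a 2-byte stream id followed by a
    4-byte payload size and the payload itself.
    Little-endian byte order is assumed here for illustration."""
    sid, size = struct.unpack_from("<HI", buf, offset)
    start = offset + 6  # skip the 2 id bytes and 4 size bytes
    payload = buf[start:start + size]
    return sid, payload, start + size

# Synthetic example: stream id 1 carrying a 3-byte payload
buf = struct.pack("<HI", 1, 3) + b"abc"
sid, payload, next_offset = read_chunk(buf)
print(sid, payload, next_offset)  # 1 b'abc' 9
```

Looking up ``sid`` in the previously received session table yields the stream
meta information for the payload.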
.. _sec-caps-web-interface:
Web interface
-------------
:term:`CAPS` ships with a read-only web interface for viewing information and
downloading data. The Web interface is disabled by default and may be enabled by
configuring a valid port number under :confval:`AS.http.port`. It provides
different tabs:
* **Channels** (figure :num:`fig-web-streams`): View and filter available
streams. For the RAW and miniSEED packets it is also possible to view the
waveform data by clicking on an entry of the stream table. miniSEED data
may be :ref:`downloaded interactively <sec-caps-web>`.
* **Server stats** (figure :num:`fig-web-overview`): View server traffic
  statistics. This tab is only accessible if the logged-in user is a member of
  the *admin* group as defined by :confval:`AS.auth.basic.users.passwd`.
* **Settings** (figure :num:`fig-web-settings`): Set the login
credentials and parameters of the other perspectives.
.. _fig-web-streams:
.. figure:: media/web_streams.png
:width: 18cm
   Stream perspective of the :term:`CAPS` web interface allowing to filter
   available streams and to view waveform data for RAW and miniSEED records.
.. _fig-web-overview:
.. figure:: media/web_overview.png
:width: 18cm
Overview perspective of :term:`CAPS` Web interface showing traffic and file
statistics.
.. _fig-web-settings:
.. figure:: media/web_settings.png
:width: 18cm
Settings perspective of :term:`CAPS` Web interface.
.. _sec-intro:
Introduction
============
The Common Acquisition Protocol Server (|appname|) was developed to fulfill the
need to transfer multi-sensor data from the station to the data center. As more
and more stations with co-located sensors such as broadband seismometers,
accelerometers, CGPS, temperature sensors, video cameras, etc. are being built,
an acquisition protocol is required which can efficiently handle low- and
high-rate data through one unified protocol.
|appname| is a core component of |scname| systems where data redundancy,
security and high availability are key.
The |appname| package ships with
* The :ref:`CAPS server <sec-caps-config>` serving :term:`miniSEED <miniSeed>`
and other multi-format data in real-time and from archive.
* :ref:`Data acquisition plugins <sec-caps-plugins>` feeding data into the CAPS
server.
* :ref:`Data retrieval and analysis tools <sec-caps-retrieval>` including
on-demand access to data on the server.
Features
--------
The core features of |appname| are:
* Multi-sensor data transfer including miniSEED records, video streams and
:ref:`almost any other format <sec-packet-types>`.
* Comprehensive data acquisition by all existing plugins for :cite:t:`seedlink`
  plus additional :ref:`CAPS plugins <sec-caps-plugins>`.
* No inventory required for immediate data acquisition.
* Stations can be added without reconfiguring the |appname| server avoiding
server downtimes.
* Pushing of data from new stations into CAPS without restarting the CAPS server.
* Lightweight protocol for minimized packet overhead.
* Reliable data transfer, no data loss due to re-transmission of data in case of
network outage or server restart.
* Archived and real-time data served through one single protocol and one
connection.
* High-quality data archives:
  * backfilling of data and correct sorting by time even if records arrive out
    of order.
* duplicate records in CAPS archives are impossible. Such duplicates may exist
in :term:`SDS` archives created by :cite:t:`scart` or :cite:t:`slarchive`.
* Rapid response systems are supported by prioritizing recent data when
  recovering from longer gaps in data acquisition, allowing the most recent
  data to be processed first before older data are backfilled.
* :ref:`Data security <sec-caps-security>` on multiple levels:
* secure communication via :ref:`SSL <sec-conf-ssl>`.
* :ref:`User authentication <sec-conf-access-auth>`.
* different user and group roles distinguishing read, write or administrative
access.
* :ref:`fine-grained access control <sec-conf-access>` on service and stream
level for defined users, user groups or IP ranges.
* :ref:`Data redundancy <caps2caps>` by real-time connection between two or more
CAPS servers.
* Easy :ref:`access to data <sec-caps-retrieval>`:
* via the :ref:`caps RecordStream <sec-caps-recstream>` provided by
|scname| :cite:p:`seiscomp`
* using :ref:`added tools and interfaces <sec-caps-retrieval>` also offering
meta data information.
* via :ref:`Seedlink <sec-caps-seedlink>`.
* by built-in standard :ref:`FDSN Web Service <sec-caps-fdsnws>`.
* by built-in :ref:`Winston Wave Server, WWS <sec-caps-wws>`, e.g.,
to :cite:t:`swarm` by USGS.
* by an interactive :ref:`Web interface <sec-caps-web-interface>` also
offering statistics and meta data information.
* from :ref:`other CAPS servers <caps2caps>`.
* through :ref:`telnet interface <sec-caps-client-interface>`.
* Server-side downsampling upon client request for optimized data transfer.
.. _sec-architecture:
Architecture
------------
The figure below shows the architecture of :term:`CAPS`. The central
component is the server, which receives data from sensors or other data centers,
stores it into an archive and provides it to connected clients. The connection
between a data provider and :term:`CAPS` is made through a plugin.
Plugins are independent applications which, similar to clients, use a
network socket to communicate with the server. The advantages of this loose
coupling are:
* Plugins may be developed independently and in an arbitrary programming
  language.
* A poorly written plugin does not crash the whole server.
* Plugins may run on different machines to pull or push data. This allows the
  access to |appname| to be secured by a firewall.
* Plugins may buffer data in case the server is temporarily unavailable.
* A |appname| client library for C++ and Python may be provided upon request,
  allowing you to develop your own applications.
.. _fig-architecture:
.. figure:: media/architecture.*
:width: 16cm
Architecture of |appname|.
.. _sec-deploy:
Deployment
----------
The acquisition of data from other data centers is most likely done through a
public interface reachable over the Internet. For instance, seismic waveform
data is commonly distributed via :term:`SeedLink` or :term:`ArcLink` servers
while the tide gauge community shares its data through a web interface. For
this center-to-center communication, a plugin is launched on the receiving site
to feed the :term:`CAPS` server.
For the direct acquisition of data from a sensor, the plugin has to run on the
sensor station. At this point the diagram distinguishes two cases: In the first
example the plugin sends the data directly to the :term:`CAPS` server running
at the data center. In the second case the data is sent to a local CAPS server
on the sensor station. From there it is fetched by the :ref:`caps2caps` plugin
running at the data center.
The options for possible deployments are illustrated in the figure below.
The advantages of the second approach are:
* **Better protection against data loss** - In case of a connectivity problem,
  plugins may buffer data temporarily. Nevertheless, main memory is limited and
  the buffered data may be lost, e.g., because of a power outage. A local
  :term:`CAPS` server will store observations to the hard drive for later
  retrieval.
* **Direct client access** - A client may directly receive data from the
  sensor station. This is particularly useful for testing and validating the
  sensor readings during the station setup phase. The standard :term:`CAPS`
  client applications may be used in the field.
* **Less packet overhead** - The :term:`CAPS` client protocol is more
lightweight than the plugin protocol. Once connected each data stream is
identified by a unique number. A client packet only consists of a two byte
header followed by the data.
.. _fig-deployment:
.. figure:: media/deployment.*
:width: 16cm
Possible deployment of |appname| and its components.
The ability to connect different :term:`CAPS` instances simplifies the sharing
of data. One protocol and one implementation are used for the sensor-to-center
and center-to-center communication. In the same way, multiple :term:`CAPS`
instances may be operated in one data center on different hardware to create
backups, establish redundancy or balance the server load.
.. _sec-caps-plugins:
Data Acquisition and Manipulation by Plugins
============================================
While the :ref:`caps server <sec-caps-server>` serves data in basically any
given format, in real time and from archive, the data are fetched from external
sources and provided to caps by acquisition plugins.
The acquisition plugins generate or retrieve data and provide them to the
:ref:`caps server <sec-caps-server>` for storage and further provision to other
clients. Depending on the specific plugin, they allow you to

* Flexibly connect to many different sources and fetch data in many different formats
* Manipulate data streams
* Generate new data.

In addition to the :ref:`plugins specifically provided by the caps-plugins package<sec-caps-acqui-plugins>`
all available :ref:`seedlink plugins<sec-seedlink-plugins>` can be considered.
CAPS also provides tools for :ref:`retrieval of data<sec-caps-retrieval>` from
CAPS servers and delivery to :program:`seedlink` or to files.
.. _fig-deployment-plugins:
.. figure:: media/deployment.*
:width: 12cm
:align: center
Possible data feeds by plugins into |appname|.
.. _sec-caps-acqui-plugins:
|appname| plugins
-----------------
Since |appname| was brought to the market, more and more plugins have been
developed in addition to the :ref:`Seedlink plugins <sec-seedlink-plugins>`.
While many of these |appname| plugins are included in the *caps-plugins* package by |gempa|,
other plugins can be provided upon request. We can also develop new customized plugins.
`Contact us <https://gempa.de/contact/>`_ and provide your specifications for new
developments.
.. note::

   While the :ref:`Seedlink plugins <sec-seedlink-plugins>` are selected and
   configured in the bindings configuration of :program:`seedlink`,
   the CAPS plugins run independently.
   Therefore, they are enabled and started as daemon modules or executed on demand.
   These plugins are configured by module configuration like any other module.
Included plugins
~~~~~~~~~~~~~~~~
The following plugins ship with the *caps-plugins* package:
.. csv-table::
:header: "Plugin name", "Description"
:widths: 15,100
":ref:`caps2caps`","Mirrors data between different :term:`CAPS` instances. All packet types are supported."
":ref:`crex2caps`","CREX CAPS plugin. Reads CREX data from file and pushes the data into the given CAPS server."
":ref:`gdi2caps`","Import data from Guralp GDI server."
":ref:`rtpd2caps`","Collects data from a RefTeK `RTPD server <http://www.reftek.com/products/software-RTPD.htm>`_. The data is stored in the :ref:`sec-pt-raw` format."
":ref:`data2caps`","Sends data to CAPS in :ref:`RAW format <sec-pt-raw>`. This simple tool can be easily extended to read custom formats and to send :ref:`miniSEED format <sec-pt-miniseed>`."
":ref:`rs2caps`","Collects data from a |scname| :term:`RecordStream`. The data is either stored in the :ref:`sec-pt-raw` or :ref:`sec-pt-miniseed` format."
":ref:`slink2caps`","Uses the available :term:`SeedLink` plugins to feed data from other sources into :term:`CAPS`. Data can be retrieved from any source for which a :term:`SeedLink` plugin exists. The data will be converted into :ref:`sec-pt-miniseed` format."
":ref:`sproc2caps`","Real-time combination, renaming, filtering and manipulation of data from the recordstream interface."
":ref:`test2caps`","Sends generic test signals to a CAPS server."
":ref:`v4l2caps`","Video for Linux capture plugin."
":ref:`win2caps`","WIN CAPS plugin. Sends data read from socket or file to CAPS."
Plugins provided upon request
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following plugins are not included in the *caps-plugins* package. They can be
provided upon request.
.. csv-table::
:header: "Plugin name", "Description"
:widths: 15,100
":ref:`edas2caps`","Reads EDAS sensor data directly from station and pushes it into CAPS."
":ref:`jw2caps`","Provides support for Joy Warrior Mem sensors connected by USB."
":ref:`ngl2caps`","Reads GNSS data from file and sends it to CAPS."
":ref:`orb2caps`","Provides miniSEED data from an Antelope ORB. Operates on the Antelope system."
":ref:`tidetool2caps`","Reads TideTool decoded data from file and pushes it into CAPS."
":ref:`yet2caps`","Reads GNSS YET packets from a USB device or from file and pushes them into CAPS."
.. _sec-seedlink-plugins:
Integration of Seedlink plugins
-------------------------------
|appname| supports all available
`seedlink plugins <https://docs.gempa.de/seiscomp/current/apps/seedlink.html>`_.
The plugins are configured with the seedlink bindings configuration. Once configured,
they are executed together with :ref:`slink2caps`.
Procedure:
#. In the module configuration of seedlink (:file:`seedlink.cfg`) set
.. code-block:: sh
plugins.chain.loadTimeTable = false
#. Create and configure the seedlink bindings
#. Update configuration
.. code-block:: sh
seiscomp update-config seedlink
#. Start or restart :ref:`slink2caps`; do not start the configured seedlink
   at the same time.
.. code-block:: sh
seiscomp restart slink2caps
.. _sec-caps-plugins-docu:
Configuration of |appname| plugins
----------------------------------
.. toctree::
:maxdepth: 2
:glob:
/apps/*2caps
Data Redundancy
===============
Data redundancy between 2 |appname| servers in real-time can be achieved by
:ref:`caps2caps`.
References
==========
.. bibliography::
.. _sec-caps-server-testing:
Server and File Testing
=======================
Command-line options support you in testing the configuration of *caps* servers.
Read the help for a complete list of options:
.. code-block:: sh
caps -h
For a general configuration test, run *caps* on the command line with
``--configtest``:
.. code-block:: sh
caps --configtest
Access configuration
--------------------
For testing the access from a specific IP address, run *caps* on the command
line with ``--print-access``. You may also test connections secured by user
name and password:
.. code-block:: sh
caps --print-access GE.*.*.* 127.0.0.1
caps --print-access GE.APE.*.* --user gempa:gempa 127.0.0.1
Server operation
----------------
You may also test the functioning of the caps server using :ref:`capstool`, the
:ref:`telnet interface <sec-caps-client-interface>` or the
:ref:`web interface <sec-caps-web-interface>`.
File testing
------------
Files stored by the server in the :ref:`caps archive <sec-archive>` may be
directly tested using :ref:`rifftool` and retrieved using :ref:`capstool`.
.. _sec-caps-server:
CAPS Server Application
=======================
The |appname| server provides data in multiple formats from different sensor
types, e.g. :term:`miniSEED`, GNSS, meteorological devices, video, etc. to
clients such as standard |scname| :term:`modules <module>`. The data can be
natively fed into and provided by the |appname| server in different flexible
ways.
* **Data acquisition:** read the section on :ref:`sec-caps-plugins`.
* **Data retrieval:** read the section :ref:`sec-caps-retrieval`.
For setting up and running a |appname| server, consider the following sections:
.. toctree::
:maxdepth: 2
/base/archive
/base/configuration
/base/interfaces
/base/server-testing
/base/redundancy
.. _sec-caps-upgrading:
Upgrading
=========
New file format
---------------
Starting from version 2021.048, CAPS introduces a new file storage format.
The files are still compatible and chunk based, but two new chunk types
were added. The upgrade itself should run smoothly without interruption, but
due to the new file format all files must be converted before they can be read.
CAPS will do that on-the-fly whenever a file is opened for reading or writing.
This can cause performance drops until all files have been converted, but it
should not cause any outages.
Rationale
---------
The time to store an out-of-order record in CAPS increased with the number of
records already stored. This was caused by a linear search for the insert
position: the more records were stored, the more records had to be checked and
the more file content had to be paged into system memory, which is a slow
operation. In addition, a second index file had to be maintained, which
requires an additional open file descriptor per data file. As we also looked
for ways to reduce disk fragmentation and to allow file size pre-allocation on
any operating system, we decided to redesign how individual records are stored
within a data file. What we wanted was:
* Fast insert operations
* Fast data retrieval
* Portable file size pre-allocations
* Efficient OS memory paging
CAPS now implements a B+tree index per data file. No additional index file is
required; the index is maintained as additional chunks in the data file itself.
Furthermore, CAPS maintains a meta chunk at the end of the file with information
about the logical and physical file size, the index chunks and so on. If that
chunk is not available or is not valid, the data file will be re-scanned
and converted. This is what actually happens after an upgrade.
As a consequence, time window requests are much faster with respect to CPU
time. File accesses are also less frequent, and the overhead of reading file
content while extracting arbitrary time windows is lower than before.
As the time range stored in the data file is now part of the meta data, a full
re-scan is not necessary when restarting CAPS without its archive log. When
dealing with many channels this speeds up re-scanning an archive considerably.
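The effect described above can be modelled in a few lines (a simplified
sketch, not the actual CAPS implementation): with records kept ordered by
time, a linear scan finds the insert position for an out-of-order record in
O(n) comparisons, while a binary search over a sorted index, analogous to
descending a B+tree, needs only O(log n):

```python
import bisect

def insert_linear(times, t):
    """Linear scan for the insert position: O(n), as with the old format."""
    pos = 0
    while pos < len(times) and times[pos] <= t:
        pos += 1
    times.insert(pos, t)
    return pos

def insert_indexed(times, t):
    """Binary search for the insert position: O(log n) comparisons,
    analogous to walking a B+tree index down to the target leaf."""
    pos = bisect.bisect_right(times, t)
    times.insert(pos, t)
    return pos

times = [10, 20, 40]
insert_indexed(times, 30)  # out-of-order record slots between 20 and 40
```

With millions of records per file, the difference between scanning every
record and a handful of index lookups dominates the insert cost, which is why
the linear search became the bottleneck for out-of-order data.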
Manual archive conversion
-------------------------
If a controlled conversion of the archive files is desired then the following
procedure can be applied:
1. Stop caps
.. code-block:: sh
$ seiscomp stop caps
2. Enter the configured archive directory
.. code-block:: sh
$ cd seiscomp/var/lib/caps/archive
3. Check all files and trigger a conversion
.. code-block:: sh
$ find . -name "*.data" -exec rifftest {} check \;
4. Start caps
.. code-block:: sh
$ seiscomp start caps
Depending on the size of the archive, step 3 can take some time.