Monitor and Control
The control panel allows you to configure the global settings for your data pod. Here you can also monitor which processes have been executed or are currently running on your platform.
To inspect the processes running within your data pod and modify the global settings, go to the Control panel.
This takes you straight to the control panel overview:
The graphical overview is structured similarly to the import panel. Here, however, the time axis shows the current time, indicated by the red vertical line, and the overview shifts to the left as time passes.
Each stripe in the graphical overview represents a table.
Another view that gives you insight into the events in the system is the PACKAGE_LOG view. It lists all packages that have been loaded into a Raw Table (its columns are documented in the reference table at the end of this section).
The Automation Manager is a complex dependency observer algorithm that monitors the state of the system and triggers execution when new data arrives.
You can activate or deactivate the Automation Manager in the automation panel.
From within the overview panel, click on Automation:
You are redirected to the automation editor:
The Automation Manager checks whether data packages should be processed according to their free flow settings.
If the Automation Manager is off, only manual flow (Execute) is possible.
This setting has no effect on IoT imports.
If you have a large number of tiny data packages, as is common when sourcing data from an IoT import or a web import, the PACKAGE_LOG can become very large. This may slow down the Automation Manager; the sketch below shows one way to check the current size of the log.
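As a minimal sketch in Python, you could count the live packages per Raw Table, assuming Direct Database Access (described under External Access below) is enabled for your data pod and the view is exposed as PACKAGE_LOG. All connection details here are placeholders.

```python
import psycopg2  # any PostgreSQL client library works; psycopg2 is one option

# Placeholder connection details; use the server address, port, user,
# and password shown in the Direct Database Access panel.
conn = psycopg2.connect(
    host="pod-access.example.com",
    port=54321,
    dbname="datapod",
    user="direct_user",
    password="secret",
)
with conn, conn.cursor() as cur:
    # Count live (not yet deleted) packages per Raw Table; a very large
    # count hints that the PACKAGE_LOG may be slowing the Automation Manager.
    cur.execute("""
        SELECT raw_table, count(*) AS packages
        FROM PACKAGE_LOG
        WHERE delete_timestamp IS NULL
        GROUP BY raw_table
        ORDER BY packages DESC
    """)
    for raw_table, packages in cur.fetchall():
        print(f"{raw_table}: {packages} packages")
conn.close()
```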
Basic access privileges for a data pod can be managed in the Account Management settings. There you can assign the roles Viewer, Reporter, Developer, and Admin.
The Kill menu gives administrators an overview of all currently running processes.
From within the overview panel, click on Kill:
The Kill option is designed for scenarios such as the following: a user has initiated processes in the database that are still running in the background, for example a long-running query in a workbook.
The list also contains processes that the browser app runs against the database, e.g. fetching the contents of a table to display in a grid in the front end.
With the Kill button, the administrator can inspect a process and request that the database abort it:
Note that aborting the process may take some time.
Following your abort request, all table transactions initiated by the process are rolled back, and the database is restored to the state it was in before the process started.
The second list in this panel shows transactions that block each other from finishing their respective tasks. If you select a row from the list, you can inspect which processes are blocking each other and kill the blocking process.
An automatic lock timeout kills the blocking process after a certain number of seconds.
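The Kill menu surfaces this blocking information in the UI. Purely for illustration, the sketch below shows what the same picture looks like in plain PostgreSQL via Direct Database Access (see External Access below): pg_stat_activity and pg_blocking_pids() are standard PostgreSQL features, while the connection details are placeholders.

```python
import psycopg2

conn = psycopg2.connect(
    host="pod-access.example.com",
    port=54321,
    dbname="datapod",
    user="direct_user",
    password="secret",
)
with conn, conn.cursor() as cur:
    # List sessions that are currently blocked, together with the PIDs
    # of the sessions blocking them (standard PostgreSQL catalog view).
    cur.execute("""
        SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, query
        FROM pg_stat_activity
        WHERE cardinality(pg_blocking_pids(pid)) > 0
    """)
    for pid, blocked_by, state, query in cur.fetchall():
        print(f"pid {pid} ({state}) blocked by {blocked_by}: {query[:60]}")
conn.close()
```

In plain PostgreSQL, terminating a session would correspond to pg_terminate_backend(pid); on the platform, use the Kill button instead.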
From within the overview panel, click on External Access:
You are redirected to the External Access editor. The editor has two sections: API Access and Direct Database Access.
Here you can create authentication keys for external tools to access workbook card data.
By assigning workbook cards to a key, you make their data externally accessible via GET requests.
You can create access keys that allow you to issue REST requests against the report tables and workbook card tables in a data pod.
From the list of available keys, click on the edit icon to view or modify the key settings:
Click on Add Key to specify the access control permissions per key:
Here you can specify a key name, a validity interval (valid from / valid to), and a minimum request interval.
See our API documentation for more information on how to use these keys.
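For a rough idea of what such a request could look like, here is a sketch in Python using the requests library. The base URL, endpoint path, and the Authorization header are hypothetical placeholders; the actual request conventions are defined in the API documentation.

```python
import requests

API_KEY = "your-access-key"           # created in the API Access panel
BASE_URL = "https://api.example.com"  # hypothetical; see the API documentation

# Hypothetical endpoint: fetch the data of a workbook card assigned to the key.
response = requests.get(
    f"{BASE_URL}/datapod/workbook-cards/my_card",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())

# When polling, respect the minimum request interval configured for the key.
```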
Enabling Direct Access allows you to access the underlying PostgreSQL database of a data pod with other tools.
Click on Add User within the panel shown below:
To get started, create a username and a password to enable direct access.
If you enable Direct Access, a port is reserved for the data pod on our dedicated secure access server. You can then configure native PostgreSQL access using the given server address, port, user, and password.
This lets you use your favorite tools together with the platform. For example, you can configure a PostgreSQL source in your BI tool of choice (e.g. PowerBI, Tableau, or Looker), or use SQL editors such as pgAdmin, SQuirreL, or Zeppelin.
Note that even though access to the data pod is secured by a password, the transmitted data is not encrypted using certificates. Contact us if you want to use certificates for your data pod access.
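As an example, a direct connection from Python with psycopg2 might look like the following minimal sketch; host, port, database name, and credentials are placeholders for the values shown in the Direct Database Access panel.

```python
import psycopg2

conn = psycopg2.connect(
    host="pod-access.example.com",  # dedicated secure access server (placeholder)
    port=54321,                     # port reserved for your data pod (placeholder)
    dbname="datapod",               # assumed database name
    user="direct_user",             # user created via Add User
    password="secret",
)
with conn.cursor() as cur:
    cur.execute("SELECT version()")  # simple connectivity check
    print(cur.fetchone()[0])
conn.close()
```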
To find out more, go to the section API Access.
If you have any questions at this point or if you encounter any issues, do not hesitate to get in touch with our support team.
The PACKAGE_LOG view has the following columns:

Column | Datatype | Description |
---|---|---|
package_id | bigint | The unique id of a data package imported into a Raw Table |
raw_table | text | The unique id of the Raw Table into which the package is imported |
min_date | text | Rule to get the package scope start date (as designated in the Raw Table settings) |
max_date | text | Rule to get the package scope end date (as designated in the Raw Table settings) |
file_name | text | In case of file packages, the name of the imported file |
file_size | bigint | In case of file packages, the size of the imported file |
num_columns | integer | Number of columns in the package |
num_rows | integer | Number of rows in the package |
filename_as_column | boolean | Whether the file name is used as an extra column in the Raw Table |
create_timestamp | timestamptz | Timestamp when the package was created (before the load) |
delete_timestamp | timestamptz | Timestamp when the package was deleted, or NULL |
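As a sketch of how this view can be queried, the following Python snippet lists the ten most recently created, still-live packages of one Raw Table via Direct Database Access; the connection details and the Raw Table id are placeholders.

```python
import psycopg2

conn = psycopg2.connect(
    host="pod-access.example.com",
    port=54321,
    dbname="datapod",
    user="direct_user",
    password="secret",
)
with conn, conn.cursor() as cur:
    # Ten most recent packages of one Raw Table that have not been deleted.
    cur.execute("""
        SELECT package_id, file_name, num_rows, create_timestamp
        FROM PACKAGE_LOG
        WHERE raw_table = %s AND delete_timestamp IS NULL
        ORDER BY create_timestamp DESC
        LIMIT 10
    """, ("my_raw_table",))
    for package_id, file_name, num_rows, created in cur.fetchall():
        print(package_id, file_name, num_rows, created)
conn.close()
```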