Configure Automation Manager Deployment#
AutomationManager.Edit
Overview#
This operation configures the deployment of an existing Automation Manager, allowing the current settings to be changed through the GUI.
Note
This is only supported in containerized Kubernetes installations.
Setup#
No specific setup is required other than to meet the preconditions of the transaction.
Preconditions#
- The Automation Manager exists in the system.
- The Automation Manager has the Deployment Mode set as AutomaticDeploy.
Warning
We recommend that Automation Manager names do not exceed 63 characters to comply with Kubernetes naming limits. If the name is longer, the deployment name will be automatically shortened using the format: prefix (16 chars) + MD5 hash (32 chars) + suffix (15 chars).
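To make the shortening scheme concrete, here is an illustrative Python sketch of the documented format (prefix of 16 characters + 32-character MD5 hash of the full name + suffix of 15 characters, totaling 63). The function name and exact slicing are assumptions for illustration; the actual product implementation may differ.

```python
import hashlib

def shorten_deployment_name(name: str, limit: int = 63) -> str:
    """Illustrative sketch of the documented shortening scheme:
    prefix (16 chars) + MD5 hash of the full name (32 chars) + suffix (15 chars).
    """
    if len(name) <= limit:
        return name
    digest = hashlib.md5(name.encode("utf-8")).hexdigest()  # 32 hex characters
    return name[:16] + digest + name[-15:]
```

Names at or under 63 characters pass through unchanged; longer names always collapse to exactly 63 characters while keeping a recognizable prefix and suffix.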
Sequence of Steps#
- Access the Automation Manager Deployment Configuration by selecting the Deployment Configuration button on the top ribbon. A dialog opens with a text editor with JSON syntax highlighting, allowing the user to make the necessary changes directly in the configuration settings.
- Optionally, the user can save the current editor buffer as a template by selecting the Set Configuration as Template button. The template can be reloaded at a future time by selecting Reload Template Configuration.
Warning
Saving a new template overwrites the existing template in the configuration entry /Cmf/System/Configuration/ConnectIoT/TemplateConfiguration. If the user reloads the configuration from the template, the current value of this configuration field is used.
- Select Save and Close to complete the transaction; the new settings will be saved.
Configuration#
The deployment configuration has six main configuration entries:
- ImageName (automatically filled) - Automatically set and synced with the selected Automation Manager package.
- UserName (automatically filled) - Filled when selecting the integration user that accepts the deployment.
- UserAccount (automatically filled) - Filled when selecting the integration user that accepts the deployment.
- Routes - Defines the HTTP or HTTPS routes configured for external access to the Automation Manager.
- Pod Resources - Defines the computing resources (CPU and memory) that a Pod can request and use within the Kubernetes cluster.
- Volumes - Defines file mounting points or shared directories made available to the Automation Manager.
Info
The user name and account are filled when changing the Automation Manager to Ready; they cannot be changed manually.
Setting up a Route#
To add a Route to the deployment configuration, add a new object to the Routes JSON array. Each object has the following fields:
- Protocol
- Port that should be opened
- Flag indicating whether TLS is enabled
| Key | Possible Values |
|---|---|
| Protocol | TCP / HTTP |
| Port | any integer in the available port range |
| IsTLSEnabled | true / false |
Table: Possible values for Route in Deployment configuration
Example HTTP Route#
```json
{
  "Routes": [
    {
      "Protocol": "HTTP",
      "Port": 5001,
      "IsTLSEnabled": false
    }
  ],
  "Volumes": [],
  "ImageName": "criticalmanufacturing/connectiot:development",
  "UserName": "exampleUser",
  "UserAccount": "EXAMPLEUSER"
}
```
With this configuration, one could now open a new REST server over HTTP listening on port 5001. For a client to perform requests, the nomenclature for the opened route is <port>.<manager-name>.<work-pool>.iot.<environment-address>. For example, an HTTP client would query the endpoint at 5001.examplemanager.general.iot.myenvkubernetes.
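As a small illustration of the route nomenclature, the snippet below builds the endpoint hostname from its parts. All values are the hypothetical examples used above; substitute your own manager name, work pool, and environment address.

```python
# Hypothetical values taken from the example above.
port = 5001
manager = "examplemanager"
work_pool = "general"
env_address = "myenvkubernetes"

# Route nomenclature: <port>.<manager-name>.<work-pool>.iot.<environment-address>
endpoint = f"{port}.{manager}.{work_pool}.iot.{env_address}"
print(endpoint)  # 5001.examplemanager.general.iot.myenvkubernetes
```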
Example TCP-IP Route#
```json
{
  "Routes": [
    {
      "Protocol": "TCP",
      "Port": 5001,
      "IsTLSEnabled": false
    }
  ],
  "Volumes": [],
  "ImageName": "criticalmanufacturing/connectiot:development",
  "UserName": "exampleUser",
  "UserAccount": "EXAMPLEUSER"
}
```
Note
TCP-IP routes require manual configuration in the Kubernetes cluster.
With this configuration, there will now be a new Cluster IP Service, which will open the port between different Pods in the cluster but will not expose it to the outside world. Exposing the port requires manual configuration, where the administrator opens a NodePort or configures the IngressController to forward the TCP-IP traffic to the created Kubernetes Service.
Setting Pod Resources Requirements and Limits#
To define the computing resources that a Pod can request and use, add a Resources object to the deployment configuration. This object specifies both requests (the minimum resources guaranteed for the Pod) and limits (the maximum resources the Pod can consume).
The following resource types are available:
- Memory – Defines the amount of RAM allocated in the Kubernetes cluster.
- CPU – Defines the amount of processing time allocated in the Kubernetes cluster.
Pod Resources Configuration#
When defining Pod resources, the following attributes can be specified under Resources:
- Requests – The minimum amount of CPU and memory required to start and run the Pod. The scheduler uses these values to select an appropriate node.
- Limits – The maximum CPU and memory the Pod can consume. If usage exceeds these values, Kubernetes may throttle CPU usage or terminate the Pod.
Example Pod Resource Configurations#
```json
{
  "Resources": {
    "Requests": {
      "Memory": "1G",
      "CPU": "1"
    },
    "Limits": {
      "Memory": "2G",
      "CPU": "1.5"
    }
  }
}
```
With these configurations:
- The Automation Manager defines the Pod’s resource requirements, which must be met by the cluster before the Pod can start.
- The Pod cannot exceed the specified limits. If it does, Kubernetes may stop or restart it depending on cluster settings.
- Requests and limits are optional and can be used independently, though defining both helps maintain performance and resource stability.
Info
More details about setting Pod resource requests and limits, including the available CPU and memory units, can be found in the official Kubernetes documentation: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
Setting Up a Volume#
The following volume types are available for the IoT volume configurations in the DevOps Center:
- PV - Utilizes existing Persistent Volumes defined within your Kubernetes cluster.
- StorageClass - Utilizes an existing StorageClass defined within your Kubernetes cluster.
Volume Configuration#
When configuring a new volume, you may need to specify the following attributes:
- Type - The type of volume (required for all volume types).
- Name - A unique name for the volume (required for PV volumes).
- MountPath - The path within the container where the volume will be mounted (required for all volume types).
- StorageClass - The storage class name (required for StorageClass volumes).
The following table presents the mandatory attributes per volume type:
| Attribute | PV | StorageClass |
|---|---|---|
| Type | Required | Required |
| Name | Required | - |
| StorageClass | - | Required |
| MountPath | Required | Required |
Table: Mandatory attributes per volume type
Example Volume Configurations#
- PV Volume
- StorageClass Volume
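The two JSON fragments below sketch what these example entries could look like. The attribute names follow the table above, and the specific values (pv-smb, /opt/iot/test, crc-csi-hostpath-provisioner, /opt/iot/data) come from the description that follows; the exact key casing and array layout are assumptions for illustration.

```json
{
  "Volumes": [
    {
      "Type": "PV",
      "Name": "pv-smb",
      "MountPath": "/opt/iot/test"
    },
    {
      "Type": "StorageClass",
      "StorageClass": "crc-csi-hostpath-provisioner",
      "MountPath": "/opt/iot/data"
    }
  ]
}
```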
With these configurations:
- The PV volume named `pv-smb` will be mounted to the `/opt/iot/test` path within the container.
- A PVC using the `crc-csi-hostpath-provisioner` StorageClass will provision a Persistent Volume, which will be mounted to the `/opt/iot/data` path within the container.
You can then use these MountPath values with the File driver or other components that interact with the file system within your Automation Manager container.
Exposing Ports#
To expose a TCP port from the Automation Manager running in a Kubernetes cluster, one option is to create a Service of type NodePort. This exposes the specified port on each node of the cluster, allowing external access.
Below is a sample Service manifest. Replace the placeholders - marked as [TOKENS] - with values relevant to your Automation Manager deployment:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: [SERVICE-NAME]
  namespace: [NAMESPACE]
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: [AUTOMATIONMANAGER-LABEL-NAME] # Use the label from the Automation Manager deployment
  ports:
    - port: [AUTOMATIONMANAGER-PORT-TO-EXPOSE]
      targetPort: [AUTOMATIONMANAGER-PORT-TO-EXPOSE]
      protocol: TCP
      # Optional: specify a static NodePort (default range: 30000-32767)
      nodePort: [NODEPORT-TO-BE-USED]
```
Network Policy Considerations#
The CM namespaces are secured using NetworkPolicies, which only allow ingress traffic through the Ingress Controller (Traefik). This setup enforces authentication and restricts external access to predefined routes.
To allow direct access to the Automation Manager via the NodePort, you must define a NetworkPolicy that explicitly permits ingress traffic. Below is an example configuration:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: [NETWORKPOLICY-NAME]
  namespace: [NAMESPACE]
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: [AUTOMATIONMANAGER-LABEL-NAME] # Use the label from the Automation Manager deployment
  ingress:
    - from:
        - namespaceSelector: {} # Allows traffic from all namespaces (adjust as needed)
  policyTypes:
    - Ingress
```
Warning
Be cautious when relaxing network policies. Ensure this access is necessary and properly secured, especially when exposing internal services externally.
