Configure Automation Manager Deployment

🔒 AutomationManager.Edit

Overview

This operation configures the deployment of an existing Automation Manager, allowing the current settings to be changed through the GUI.

Note

This is only supported in containerized Kubernetes installations.

Setup

No specific setup is required other than to meet the preconditions of the transaction.

Preconditions

  • The Automation Manager exists in the system.
  • The Automation Manager has the Deployment Mode set as AutomaticDeploy.

Sequence of Steps

  1. Access the Automation Manager Deployment Configuration by selecting the Deployment Configuration button on the top ribbon. A dialog opens with a JSON text editor with syntax highlighting, allowing the user to make the necessary changes directly in the configuration settings.
  2. Optionally, the user can save the current editor buffer as a template by selecting the Set Configuration as Template button. This template can be reloaded later by selecting the Reload Template Configuration button.

    Warning

    Saving a new template will overwrite the existing template in the configuration entry /Cmf/System/Configuration/ConnectIoT/TemplateConfiguration. If the user reloads the configuration from the template, the current value of this configuration field will be used.

  3. Select Save and Close to complete the transaction and save the new settings.

Figure: Configure Automation Manager deployment configuration dialog

Configuration

The deployment configuration has five main configuration entries:

  • ImageName (automatically filled) - Automatically set and kept in sync with the selected Automation Manager package.
  • UserName (automatically filled) - Filled when selecting an integration user to accept this deployment.
  • UserAccount (automatically filled) - Filled when selecting an integration user to accept this deployment.
  • Routes - The HTTP(S) routes the user wishes to configure for external access to the Automation Manager.
  • Volumes - The file mounting points or shares the user may want available to the manager.

Info

The user name and account are filled when changing the Manager to Ready; they cannot be changed manually.

Setting up a Route

To add a Route to the deployment configuration, add a new object to the Routes JSON array. The available fields are the following:

  • Protocol
  • Port that should be opened
  • Flag indicating whether TLS is enabled
Key            Possible Values
Protocol       TCP / HTTP
Port           Any integer in the available port range
IsTLSEnabled   true / false

Table: Possible values for Route in Deployment configuration

Example HTTP Route

{
    "Routes": [
        {
            "Protocol": "HTTP",
            "Port": 5001,
            "IsTLSEnabled": false
        }
    ],
    "Volumes": [],
    "ImageName": "criticalmanufacturing/connectiot:development",
    "UserName": "exampleUser",
    "UserAccount": "EXAMPLEUSER"
}

With this configuration, one can now open a new REST server over HTTP listening on port 5001. In order for a client to perform requests, the nomenclature for the opened route is <port>.<manager-name>.<work-pool>.iot.<environment-address>. For example, the HTTP client would need to query the endpoint at 5001.examplemanager.general.iot.myenvkubernetes.
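
The IsTLSEnabled flag controls whether the route is served over TLS. A minimal variation of the example above with TLS turned on (how certificates are provisioned is environment-specific and is not part of this JSON):

{
    "Routes": [
        {
            "Protocol": "HTTP",
            "Port": 5001,
            "IsTLSEnabled": true
        }
    ],
    "Volumes": [],
    "ImageName": "criticalmanufacturing/connectiot:development",
    "UserName": "exampleUser",
    "UserAccount": "EXAMPLEUSER"
}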

Example TCP-IP Route

{
    "Routes": [
        {
            "Protocol": "TCP",
            "Port": 5001,
            "IsTLSEnabled": false
        }
    ],
    "Volumes": [],
    "ImageName": "criticalmanufacturing/connectiot:development",
    "UserName": "exampleUser",
    "UserAccount": "EXAMPLEUSER"
}

Note

For TCP-IP routes, manual configuration in the Kubernetes cluster is required.

With this configuration, a new ClusterIP Service will be created, which opens the port between different Pods in the cluster but does not expose it to the outside world. Exposing the port requires manual configuration, where the administrator opens a NodePort or configures the IngressController to forward the TCP-IP traffic to the created Kubernetes Service.
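
As an alternative to a NodePort, if the environment's Ingress Controller is Traefik (as in the CM namespaces described below), the TCP forwarding can be configured with an IngressRouteTCP resource. The following is a sketch, assuming a TCP entry point named iot-tcp is defined in the Traefik static configuration (the CRD group is traefik.io in recent Traefik versions, traefik.containo.us in older ones):

apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: [INGRESSROUTETCP-NAME]
  namespace: [NAMESPACE]
spec:
  entryPoints:
    - iot-tcp                  # assumed TCP entry point from the Traefik static configuration
  routes:
    - match: HostSNI(`*`)      # `*` is the only valid HostSNI matcher for non-TLS TCP traffic
      services:
        - name: [SERVICE-NAME] # the ClusterIP Service created for the TCP route
          port: 5001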

Setting Up a Volume

The following volume types are available for the IoT volume configurations in the DevOps Center:

  • PV: Utilizes existing Persistent Volumes defined within your Kubernetes cluster.
  • StorageClass: Utilizes an existing StorageClass defined within your Kubernetes cluster.

Volume Configuration

When configuring a new volume, you may need to specify the following attributes:

  • Type: The type of volume (required for all volume types).
  • Name: A unique name for the volume (required for PV volumes).
  • MountPath: The path within the container where the volume will be mounted (required for all volume types).
  • StorageClass: The storage class name (required for StorageClass volumes).

The following table presents the mandatory attributes per volume type:

Attribute      PV          StorageClass
Type           Required    Required
Name           Required    -
StorageClass   -           Required
MountPath      Required    Required

Table: Mandatory attributes per volume type

Example Volume Configurations

  • PV Volume:

    {
        "Type": "PV",
        "Name": "pv-smb-test",
        "MountPath": "/opt/iot/test"
    }
    
  • StorageClass Volume:

    {
        "Type": "StorageClass",
        "StorageClass": "smb-sc", 
        "MountPath": "/opt/iot/data"
    }
    

With these configurations:

  • The PV volume named pv-smb-test will be mounted to the /opt/iot/test path within the container.
  • A PVC using the smb-sc StorageClass will provision a Persistent Volume, which will be mounted to the /opt/iot/data path within the container.

You can then use these MountPath values with the File driver or other components that interact with the file system within your IoT automation manager container.
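
For reference, these volume objects are placed in the Volumes array of the deployment configuration, alongside any routes. Combining the two examples above with the earlier example values gives a configuration like this:

{
    "Routes": [],
    "Volumes": [
        {
            "Type": "PV",
            "Name": "pv-smb-test",
            "MountPath": "/opt/iot/test"
        },
        {
            "Type": "StorageClass",
            "StorageClass": "smb-sc",
            "MountPath": "/opt/iot/data"
        }
    ],
    "ImageName": "criticalmanufacturing/connectiot:development",
    "UserName": "exampleUser",
    "UserAccount": "EXAMPLEUSER"
}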

Exposing Ports

To expose a TCP port from the Automation Manager running in a Kubernetes cluster, one option is to create a Service of type NodePort. This exposes the specified port on each node of the cluster, allowing external access.

Below is a sample Service manifest. Replace the placeholders - marked as [TOKENS] - with values relevant to your Automation Manager deployment:

apiVersion: v1
kind: Service
metadata:
  name: [SERVICE-NAME]
  namespace: [NAMESPACE]
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: [AUTOMATIONMANAGER-LABEL-NAME] # Use the label from the Automation Manager deployment
  ports:
    - port: [AUTOMATIONMANAGER-PORT-TO-EXPOSE]
      targetPort: [AUTOMATIONMANAGER-PORT-TO-EXPOSE]
      protocol: TCP
      # Optional: specify a static NodePort (default range: 30000–32767)
      nodePort: [NODEPORT-TO-BE-USED]
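
For instance, for the TCP route opened on port 5001 in the earlier example, a filled-in manifest might look as follows (all names here are hypothetical and must be replaced with the values from your deployment):

apiVersion: v1
kind: Service
metadata:
  name: examplemanager-tcp      # hypothetical Service name
  namespace: cmf-environment    # hypothetical namespace
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: examplemanager  # hypothetical Automation Manager label
  ports:
    - port: 5001
      targetPort: 5001
      protocol: TCP
      nodePort: 30501           # hypothetical static NodePort within the default range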

Network Policy Considerations

The CM namespaces are secured using NetworkPolicies, which only allow ingress traffic through the Ingress Controller (Traefik). This setup enforces authentication and restricts external access to predefined routes.

To allow direct access to the Automation Manager via the NodePort, you must define a NetworkPolicy that explicitly permits ingress traffic. Below is an example configuration:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: [NETWORKPOLICY-NAME]
  namespace: [NAMESPACE]
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: [AUTOMATIONMANAGER-LABEL-NAME] # Use the label from the Automation Manager deployment
  ingress:
    - from:
        - namespaceSelector: {}  # Allows traffic from all namespaces (adjust as needed)
  policyTypes:
    - Ingress

Warning

Be cautious when relaxing network policies. Ensure this access is necessary and properly secured, especially when exposing internal services externally.
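
One way to follow that advice is to narrow the from clause of the example policy to a known source range instead of allowing all namespaces. A sketch, using a hypothetical CIDR:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: [NETWORKPOLICY-NAME]
  namespace: [NAMESPACE]
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: [AUTOMATIONMANAGER-LABEL-NAME]
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/16  # hypothetical range; restrict to the clients that actually need access
  policyTypes:
    - Ingress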