IANN FileGPS OpenShift Installation Guide

Introduction to the Guide

This installation guide provides detailed, step-by-step instructions for setting up the IANN (Intelligent Artificial Neural Network) platform and its associated components. The purpose of this guide is to ensure a smooth and error-free installation process, enabling users to quickly configure the environment and start leveraging the system’s capabilities.

What is IANN?

IANN (Intelligent Artificial Neural Network) is Pragma Edge’s AI-powered unified platform designed to bring intelligence, automation, and predictive analytics into business operations. It combines file tracking, monitoring, and AI-driven insights to ensure end-to-end visibility, operational efficiency, and proactive issue resolution.

IANN plays a key role in driving digital transformation across industries by turning traditional data exchanges into intelligent, insight-driven processes.

IANN is built as a modular system with three primary components:

  1. FileGPS – Tracks and monitors file transactions across systems for visibility and SLA compliance.
  2. Monitor – Provides real-time monitoring of processes, system health, and metrics with alerts.
  3. AI – Adds intelligence through predictive analytics, anomaly detection, and GenAI insights.

Who should use this Installation Guide?

This installation guide is designed for a wide range of technical users involved in deploying, maintaining, or supporting IANN solutions. The intended audience includes:

  1. System Administrators / DevOps Engineers
  • Primary audience.
  • Responsible for deploying, configuring, and maintaining the software in different environments (development, staging, production).
  • Use the guide to ensure all prerequisites, system settings, and deployment steps are correctly followed.
  2. IT Support Teams / Technical Support Engineers
  • Use the guide to troubleshoot installation-related issues reported by end users or internal teams.
  • May use it to replicate the installation process for issue diagnosis.
  3. Software Developers
  • Particularly when working in teams or setting up the project locally.
  • Use the guide to set up their development environments and test deployments.
  4. Customers / End Users (for On-Premises Software)
  • If the product is delivered for self-hosting, customers’ technical teams use the installation guide.
  • Often non-developers with technical backgrounds follow these instructions.
  5. Consultants / System Integrators
  • Third-party professionals who assist organizations in setting up and customizing the product.
  6. QA Engineers / Testers
  • Use the guide to install and configure the software in test environments to validate features or bug fixes.

What does this Installation Guide Cover?

This guide serves as a comprehensive manual for deploying and configuring the IANN platform in various environments (Linux, Windows, OpenShift). It covers the following:

  1. Purpose and Overview of IANN
  • Clarifies the objective of the document and introduces the IANN FileGPS platform.
  2. System Architecture and Deployment Models
  • Details the architecture of IANN FileGPS and how it integrates with the broader ecosystem.
  3. Step-by-Step Installation Instructions
  • Linux Installations: Covers UI, Server, Rest Consumer, and Client Server deployments.
  • Windows Installations: Includes setup with NSSM, password encryption, component deployments, and validation.
  • OpenShift Installations: Provides detailed guidance on Helm charts, UI/backend configurations, and deployment steps.
  4. Anomalies Detection Modules
  • Deployment instructions for File Anomalies and Transaction Anomalies, supporting both Linux and Windows environments.
  5. IANN Monitor Deployment
  • Instructions for installing IANN Monitor on Linux, Windows, and OpenShift.
  • Includes pre-requisites, system validation, Helm-based deployments, and anomaly detection features.
  6. IANN File Transaction Search
  • Covers deployment and configuration of the File Transaction Search module across platforms.

Each section includes detailed prerequisites, component-level configuration, and post-deployment validation steps to ensure successful setup and operation of IANN in production and non-production environments.

1. Introduction to IANN FileGPS

Pragma Edge’s IANN FileGPS is a robust, end-to-end file monitoring and tracking solution designed to provide organizations with real-time visibility and control over file flows across distributed IT environments. It captures, aggregates, and contextualizes file-related events—enriching them with business context to enhance operational transparency and support informed decision-making.

Beyond traditional monitoring, IANN FileGPS empowers business users to define and manage SLAs (Service Level Agreements) that are closely aligned with business outcomes. SLAs can be configured at multiple levels—such as partner, enterprise, business unit, or even individual file or transaction—enabling fine-grained control and ensuring operational accountability.

With its centralized web interface, powerful alerting mechanisms, and audit-ready architecture, IANN FileGPS ensures that business-critical file processes are continuously and reliably monitored. The solution is built with scalability and security in mind, supporting both on-premise and cloud-based deployments to suit diverse enterprise needs.

2. Architecture of IANN FileGPS

The IANN FileGPS System provides a comprehensive monitoring and alerting solution for file transfer operations across enterprise systems. It ensures timely delivery, transparency, and accountability of file movement by integrating with various source systems and presenting the data through a centralized server and intuitive user interface.

 

[Figure: IANN FileGPS end-to-end architecture diagram]

Below is a breakdown of the end-to-end flow involved in the IANN FileGPS:

·       IANN FileGPS Client: The IANN FileGPS Client is responsible for capturing events from diverse enterprise sources such as databases, logging systems, applications, and APIs. Each event is transformed into a standardized, canonical JSON format to ensure consistency across the system. The client then transmits these events securely either via a REST Consumer API over HTTPS to the IANN FileGPS Server or by publishing them to a designated Apache Kafka topic. This flexible transmission mechanism supports both synchronous and asynchronous data flows, enabling real-time or near-real-time event ingestion based on deployment requirements.

·       Event Ingestion: The IANN FileGPS Server is designed to reliably ingest and persist event data received from the IANN FileGPS Client. When data is transmitted via the REST Consumer API, the server’s REST consumer component processes the incoming canonical JSON payloads and stores them directly into the underlying database. Alternatively, if Kafka is used for transmission, a dedicated Kafka consumer service on the server continuously reads from the configured topic, processes the event data, and commits it to the database. This dual-ingestion mechanism ensures flexibility and resilience in handling both synchronous and asynchronous event flows across diverse enterprise environments.

·       User Access & Web UI: The IANN FileGPS platform provides a secure and intuitive web interface accessible via Single Sign-On (SSO) with SAML 2.0 or through local authentication using JWT-based tokens. The system enforces robust Role-Based Access Control (RBAC) to ensure users have appropriate access based on their roles and responsibilities. Once authenticated, users can leverage powerful features such as real-time file search, interactive dashboards displaying active file events, SLA compliance status, configurable alerts, and comprehensive historical reporting. The platform is designed with enterprise-grade security, supporting session timeouts, token expiry, and full audit logging to meet compliance and governance requirements.

·       Alerting & Notifications: The Alerting & Notifications component of the IANN FileGPS platform provides real-time monitoring of critical file and transaction-related events, enabling timely detection and resolution of operational issues. It supports a robust framework of one-time, rule-based alerts, which are evaluated continuously based on system and user-defined conditions, and supports multiple alert types (SLA, FLA, FNR, TNR, TRA, and Subscription Alerts), with notifications dispatched via email, the IANN dashboard, and REST API callbacks to external systems such as ServiceNow, PagerDuty, Slack, and Microsoft Teams.

·       Reporting & Audit Trail: The IANN FileGPS platform provides a robust and extensible Reporting capability that delivers actionable insights into both file and transaction-level activities. It offers a wide range of out-of-the-box reports to track SLA adherence, file lifecycle events, failure patterns, and system performance. Additionally, the platform empowers users to build custom reports by applying business-specific filters, data views, and logic tailored to operational needs. This comprehensive reporting layer enables stakeholders to monitor real-time operations, conduct historical analysis, identify trends, and support audit and compliance initiatives—making it a core enabler of data-driven decision-making within the enterprise.

3. IANN FileGPS OpenShift Deployment

1. Architecture of IANN FileGPS in OpenShift

2. Prerequisites for IANN FileGPS Deployment in OpenShift

Before proceeding with the installation of the FileGPS application in an OpenShift cluster, ensure the following prerequisites are in place:

·       Access to the OpenShift Cluster: Verify that you have the necessary permissions to access and manage the target OpenShift cluster.

 

·       FileGPS Helm Package: Ensure you have the Helm package required for deploying the FileGPS application.

 

·       Container Registry Access: Obtain the credentials needed to access the container registry that hosts the FileGPS application images.

 

·       Access to the FileGPS Namespace: Confirm that you have the appropriate permissions to create and manage resources within the designated namespace in the OpenShift cluster.
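
As a quick sanity check before proceeding, you can confirm cluster access and namespace permissions from the Linux host that will be used for the deployment. The commands below are a minimal sketch; the token, API server URL, and namespace name are placeholders for your environment.

# Log in to the target OpenShift cluster
oc login --token=<token> --server=<api-server-url>

# Confirm the identity you are logged in as
oc whoami

# Check that you are allowed to create resources in the target namespace
oc auth can-i create deployments -n <namespace_name>
oc auth can-i create secrets -n <namespace_name>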

 

2.1 Supported Platforms and Delivery Model

The FileGPS application can be deployed on the following platforms:

·       RedHat OpenShift Container Platform >= 4.14

·       IBM Cloud

·       AWS Cloud

·       Azure Cloud

·       On-Premises Infrastructure

2.2 Versions

            ·       FileGPS-6.4.0

2.3 Download and Transfer the Helm Package

Start by downloading the FileGPS Helm package. Transfer the package to a Linux backend that has access to the OpenShift cluster where the deployment will take place. After transferring the package, extract it using the following command:

tar -xvf <helm_package_name>

2.4 Create a Namespace in OpenShift

Before making any modifications to the Helm charts, it is essential to create a dedicated namespace in the OpenShift cluster for the FileGPS application. Execute the following command to create the namespace:

oc create namespace <namespace_name>

2.5 Create an Image Pull Secret

Once the namespace has been created, the next step is to set up an image pull secret. This secret contains the necessary credentials to authenticate with the container registry and pull the required FileGPS images. Use the following command to create the secret:

oc create secret docker-registry <secret_name> \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  -n <namespace>

By completing these steps, you ensure that the OpenShift cluster is properly configured to authenticate with the container registry during the deployment process.
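
To confirm the secret was created, and optionally to attach it to a service account for image pulls, the following commands can be used as a sketch (the secret, namespace, and service account names are placeholders):

# Verify the image pull secret exists in the FileGPS namespace
oc get secret <secret_name> -n <namespace_name>

# Optional: link the secret to a service account so that its pods can pull the FileGPS images
oc secrets link <service_account_name> <secret_name> --for=pull -n <namespace_name>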

3. Helm Charts Changes

To further customize the deployment, additional changes can be made to the Helm charts. Begin by opening the values.yaml file in a text editor of your choice on the Linux backend.

3.1 Secret file Configuration

  • In the root path of the Helm chart there is a file named “app-secret.yaml”. Add the passwords here so they are stored as Kubernetes Secrets rather than in plain text in values.yaml.
  • Provide values for the fields shown below; values.yaml references these secrets by the names defined here.
  • In DB_PASS, provide the UI database password.

apiVersion: v1

kind: Secret

metadata:

  name: ui-db-secret

type: Opaque

stringData:

  DB_PASS:            # provide the original password for the PostgreSQL/Oracle database

  • Give the SMTP username and password for the backend.
  • Give the DB username and password; the password should be given in AES-256 encrypted form.

 

apiVersion: v1

kind: Secret

metadata:

  name: application-secret

type: Opaque

stringData:

  encrypted-smtp-password:

  encrypted-postgres-password:

  encrypted-sterling-db-password:

  key-store-password:

  mail-truststore-password:

  encrypted-postgres_purged_db-password:

  encrypted-oracle_db-password:

  encrypted-oracle_purged_db-password:

  encrypted-rest-authentication-password:

  encrypted-rest-authentication-cert_password:

 

  • Apply the file with the command oc apply -f app-secret.yaml to create the secrets.
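
A quick way to confirm the secrets exist after applying the file (the secret names below follow the manifests shown above; adjust them if yours differ):

# List the secrets and inspect their keys without revealing the values
oc get secret ui-db-secret application-secret -n <namespace_name>
oc describe secret application-secret -n <namespace_name>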

3.2 Service Account Configuration Section

Give the service account name.

serviceAccount:

  create: true

# Set to true to create a new ServiceAccount; false to use an existing one

  name: "filegps-test"

# Provide the ServiceAccount name if it’s already created manually

3.3 Security Context and User id configuration

To ensure that the FileGPS application can access the filesystem correctly, you need to specify the user ID (UID) under which it will run. Additionally, configure the filesystem group ID (fsGroup) and the supplemental group IDs (supplementalGroups) to grant the necessary permissions for accessing the filesystem.

By default, the FileGPS application runs with UID 1011.

security:

  runAsUser: 1011 #specify the custom user to run the container

  supplementalGroups:

    - 555

  fsGroup: 1011 #specify the custom group to run the container
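
The UID and supplemental group values set here generally need to fall within the ranges OpenShift assigns to the namespace, unless a custom SCC grants broader access. As a quick check (a sketch using the standard namespace annotations), the allotted ranges can be inspected:

# Show the UID and supplemental group ranges allotted to the FileGPS namespace
oc get namespace <namespace_name> -o yaml | grep "openshift.io/sa.scc"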

3.4 Imagepull secret and Pull Policy

·       imagePullSecret: Refers to the secret used for pulling images from the container registry. Provide the name of the image pull secret created earlier.

·       imagePullPolicy: Specifies the image pull policy. Always pulls the image on every start, IfNotPresent pulls only if the image is not present locally, and Never uses only local images.

 

imagePullSecret:

 # Pre-requisite: manually create this image registry secret for pulling the images using the command line

imagePullPolicy: Always            # Specify the image pull policy

 

4. UI Configuration

4.1 Image Configuration

       ·       Give the UI image repository and tag.

 

ui:

  replicaCount: 1                      #Provide number of pods

  image:

    repository:                        # Container image for the UI

    tag:                               # Specify image tag to use

4.2 UI Pod resource configuration

  •        Give the resource CPU and memory requests and limits for the UI pod.
  •        Give the hostname for accessing the UI. If you do not specify a hostname, a DNS name is generated automatically.

resources:

    requests:

      memory: "1Gi" #specify the memory request as needed

      cpu: "1000m"  #specify the cpu cores request as needed

    limits:

      memory: "2Gi" #specify the maximum memory a pod can utilize

      cpu: "2000m"  #specify the maximum cpu a pod can utilize

  hostname:

  •   Provide uiConfig details such as accept-licence, filegps-deployment, enterprise-edition, reprocess, mailbox-value, and events-count-limit.
  • Set superset-edition to true if you want to integrate FileGPS with Superset.

  uiConfig:

    accept-licence: true

    filegps:

      color: Red

      filegps-deployment: false # default value is true for PragmaLogo

      enterprise-edition: false # default value is true for All Features

      superset-edition: false #default value is true for Superset Embedded Dashboard Feature

      reprocess: true # default value is true for reprocess Feature

      mailbox-value: 5

      events-count-limit: 1000 # default value is 100 for Events Search limit

      apps-enabled:          #Licensing Key will be shared by the Product Delivery Team

4.3 Server port and logger configuration

·       Provide the Port number of the UI service and logging level

uiConfig:

   server:

     port: 8787            # Port number for the UI service

     serverHeader: FileGPS

   logger:

     level: DEBUG          # Logging level (DEBUG/INFO/ERROR)

     retentionPeriod: 10

4.4 UI SMTP configuration

·       Provide the SMTP details such as host, port, username, from, app-contact-mail, mail-signature, and properties.

·       To enable mail service with SSL, set ui.uiConfig.email.properties.ssl.truststore.enabled to true. Generate the truststore and specify its name using ui.uiConfig.email.properties.ssl.truststore.truststore-name. Ensure the truststore is placed in the Helm certs folder.

email:

      host:                  # SMTP host for email notifications

      port:                  # SMTP port for email notifications

      username:              # SMTP username

      password:              # Kubernetes Secret containing SMTP password

      from:                  # Sender’s email address

      app-contact-mail:      # Contact email for support

      mail-signature:        # Email signature text

      properties:

        smtp:

          auth: true

          starttls:

            enable: true

          ssl:

            truststore:

              enabled: false

              truststore-name:

              truststorePassword:
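
One common way to generate the SMTP truststore mentioned above is with keytool, importing the mail server's CA certificate. This is a sketch only; the certificate file name, truststore name, and password are placeholders that must match the values configured in the email properties.

# Import the SMTP server's CA certificate into a PKCS12 truststore, then place it in the Helm certs folder
keytool -importcert -alias smtp-ca -file <smtp-ca-cert.pem> \
  -keystore <truststore-name> -storetype PKCS12 -storepass <truststore-password> -noprompt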

·       Provide the JWT secret key and session expiry.

       jwt:

      secret-key:               # Secret key for JWT authentication

      session-expire: 60        # JWT session expiration time in minutes
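
If you need to generate a random value for the JWT secret key, one simple option (an illustration, not a product requirement) is:

# Generate a 32-byte random secret, base64 encoded
openssl rand -base64 32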

4.5 SAML configuration

       ·       Set enabled to true if you want to configure SAML.

       ·       Provide SAML configuration details such as sso-url, metadata file name, private-key-file-name, and certificate-file-name.

       ·       Place the metadata file, the private key file (tls.key – the server/cluster private key file), and the certificate file (tls.crt – the server/cluster certificate file) in the certs folder.

    saml:

      enabled: false                        # Enable/disable SAML authentication

      sso-url:                             # SAML Single Sign-On (SSO) URL

      sloUrl:                              # The Single Logout (SLO) URL for handling user logout via SAML.

      app-slo:                             # Application-specific Single Logout (SLO) configuration

      idp:

        metadata: metadata.xml # SAML metadata file; place inside the `certs` folder

        registration-id: fileGPS # Identifier used for registering with the IdP. It must match the IdP configuration

      idp-groups-role-mapper:              # Map SAML groups to application roles

      jwt:

        secret-key:         # Secret key for JWT in SAML

        session-expire: 60                 # JWT session expiration for SAML in minutes

      content:

        signing:

          credentials:

            private-key-file-name:         # provide private key file used for signing SAML requests; place it inside the certs folder

            certificate-file-name: # provide certificate file name used for signing SAML requests; place it inside the certs folder

4.6 Timezone, SSL and datasource configuration

  •           Provide the timezone as required
  •           Provide the keystore file which should be placed inside the “certs” folder.

   timezone:

      enabled: true                        # Enable/disable timezone settings

      zoneId: America/Chicago              # Timezone to be used

    login:

      max-false-attempts: 5 # count

      reset-false-attempts: 30 #Minutes

    ssl:

      enabled: true                       # Enable/disable SSL for the UI

      key-store:                           # Keystore file for SSL; place inside the `certs` folder

      key-store-type: PKCS12               # Keystore type

      password: ""                           # Secret containing the keystore password

      enabled-protocols:

        - TLSv1.2

        - TLSv1.3

      ciphers:

        - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256

        - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256

        - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384

        - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

    datasource:

      url:      # Database connection URL

      username:                   # Database username

      driver-class-name: #org.postgresql.Driver #oracle.jdbc.driver.OracleDriver  # JDBC driver class name
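
The PKCS12 keystore referenced in the ssl section can be generated from the server certificate and private key with openssl. This is a sketch assuming the files are named tls.crt and tls.key; the export password must match the keystore password secret configured above.

# Bundle the certificate and private key into a PKCS12 keystore, then place it in the certs folder
openssl pkcs12 -export -in tls.crt -inkey tls.key \
  -out keystore.p12 -name filegps-ui -passout pass:<keystore-password>

For the datasource url, the standard JDBC formats apply, for example jdbc:postgresql://<host>:<port>/<database> for PostgreSQL or jdbc:oracle:thin:@//<host>:<port>/<service> for Oracle.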

5. Backend Configuration

5.1 Image configuration

  • Give the backend image repository and tag.

backend:

  image:

    repository:                   # Backend container image

    tag: ""                      # Image tag for backend

5.2 Backend Pod resource configuration

  •           Give the resource CPU and memory requests and limits for the backend pod.

resources:

    requests:

      memory: "1Gi"             #specify the memory request as needed

      cpu: "1000m"              #specify the cpu cores request as needed

    limits:

      memory: "2Gi"             #specify the maximum memory a pod can utilize

      cpu: "2000m"              #specify the maximum cpu a pod can utilize

5.3 Persistent Volume Configuration

  •          Provide the persistent volume details.

  persistence:

    useDynamicProvision: true              # Use dynamic provisioning for PVC

    pvcName: ""                            # Existing PVC name if dynamic provisioning is disabled

    accessModes: ReadWriteMany

    storageClassName:                      # Storage class to use

    storageSize: 1Gi                          # Size of persistent storage
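
If dynamic provisioning is enabled, the storage class name must exist in the cluster; if it is disabled, the named PVC must already exist in the FileGPS namespace. The following commands are a quick way to check (the namespace name is a placeholder):

# List the storage classes available in the cluster
oc get storageclass

# List existing PVCs in the FileGPS namespace (relevant when useDynamicProvision is false)
oc get pvc -n <namespace_name>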

 

  •           Provide timezone ID.
  •           Provide the SMTP details such as host, port, from, and username.

backendConfig:

    filegps:

      zoneid: America/Chicago

    email:

      host:                                # SMTP host

      port: 587                            # SMTP port

      from:                                # Specify the from mail id.

      username:                            # Specify the username, example: username@company.com

      secretName:                          # Secret for SMTP credentials

      email_authentication: true

 

5.4 Database details configuration

·       Provide database details based on the required database type. Specify the database type using backend.backendConfig.database_type.db_type. If using Oracle or PostgreSQL, provide the corresponding database details under the defined db_type.

·       Provide the purged database details for data retention.

   postgres_db:   

      database:                            # Provide the postgres DB name

      host:                                # Provide the postgres DB hostname

      username:                            # Provide the postgres DB username

      password: ""         # Provide the secret name which contains the postgres DB encrypted password

      schema:                          # Provide the postgres schema name

      port:                                # Provide the postgres DB Port number

      driver_name: postgresql              # Provide the postgres DB driver name

    postgres_purged_db:

      database:                            # Provide the postgres_purge DB name

      host:                                # Provide the postgres_purge DB hostname

      username:                            # Provide the postgres_purge DB username

      password: ""         # Provide the secret name which contains the postgres_purge DB encrypted password

      purged_schema:                        # Provide the postgres schema name

      port:                               # Provide the postgres DB Port number

      driver_name: postgresql              # Provide the postgres DB driver name

    oracle_db:

      database:                           # Provide the oracle DB name

      host:                               # Provide the oracle DB hostname

      username:                           # Provide the oracle DB username

      password: ""         # Provide the secret name which contains the oracle DB encrypted password

      schema:                              # Provide the oracle schema name

      port:                               # Provide the oracle DB Port number

      driver_name: oracle                        #Provide the oracle DB driver name

    oracle_purged_db:

      database:                            # Provide the oracle_purged_db DB name

      host:                                # Provide the oracle_purged_db DB hostname

      username:                            # Provide the oracle_purged_db DB username

      password: ""         # Provide the secret name which contains the oracle_purged_db DB encrypted password

      schema:                              # Provide the oracle_purged_db schema name

      port:                               # Provide the oracle_purged_db DB Port number

      driver_name: oracle                       #Provide the oracle_purged_db DB driver name

 

5.5 Schedulers details configuration

·       Provide the scheduler configurations (in minutes) to run backend jobs if the configurations are missing from the database.

     schedulers:

      fnr_interval_minutes: 15

      fnr_group_interval_minutes: 5

      tnr_interval_minutes: 15

      tnr_group_interval_minutes: 15

      sub_interval_minutes: 10

      sub_group_interval_minutes: 10

      fla_interval_minutes: 10

      fla_group_interval_minutes: 10

      tra_interval_minutes: 15

      tra_group_interval_minutes: 15

      sla_interval_minutes: 5

      qda_mq_interval_minutes: 30

      calculate_sla_interval_minutes: 2

      calculate_transaction_sla_interval_minutes: 2

      update_context_interval: 300

      transaction_duration_interval_minutes: 15

      summary_time_to_start: "12:00 AM"

      retention_policy_time_to_start: "12:00 AM"

      agent_down_alert_interval_minutes: 60

      events_correlation_interval_minutes: 5

      qda_si_interval_minutes: 30

      update_context_interval_minutes: 300

5.6 App data configuration

·       Provide the FileGPS client database details

appdata:                              

      si_dbtype:

      jar_file:            # Place the Jar File inside the Pod (/IANN/FileGPS/FileGPS_Release/jars)

      java_classname:

      si_host:

      si_schema:

      si_username:

      secretName: ""     # Provide the secret name which contains encrypted-sterling-db-password

      si_dbname:

      si_port:

5.7 Kafka Configuration

·       Provide the Kafka details.

   filegps_config:

      is_nodeid: False

    kafka:

      topic_name:

      bootstrap_servers:

      timeout:

      num_messages:

      group_id:

      security_protocol:

      sasl_mechanism:

      sasl_username:

      sasl_password:

6. RestConsumer configuration

6.1 Image configuration

·       Give the Image repository and tag.

rest:

  image:

    repository:                            #REST consumer container image

    tag:                                 #Image tag

6.2 Resource configuration

·       Provide the resource limits such as CPU and Memory as needed.

resources:

    requests:

      memory: "1Gi"              #specify the memory request as needed

      cpu: "1000m"               #specify the cpu cores request as needed

    limits:

      memory: "2Gi"              #specify the maximum memory a pod can utilize

      cpu: "2000m"               #specify the maximum cpu a pod can utilize

6.3 Persistent Volume Configuration

·       Provide the persistent volume details

persistence:

    useDynamicProvision: true              #Use dynamic provisioning for PVC

    pvcName: ""                            #Existing PVC name if dynamic provisioning is disabled

    storageClassName:                      #Storage class to use

    storageSize: 1Gi 

·       Provide rest consumer hostname and the path where you want to store the logs

  rest_consumer_api:

    hostname: rest.example.com       #Provide hostname where you want to access

  logs:

    path: /opt/rest-consumer/logs/   #Provide path to logs

6.4 Authentication Configuration

·       Provide rest consumer authentication and SSL details

·       To enable SSL (use_ssl = true), generate a keystore in .pem format containing the cluster’s SSL certificates. Place the generated keystore in the certs folder for automatic secret generation. Specify the keystore name in the configuration under rest.authentication.keystore_name.

  authentication:

    auth_type: basic_auth

    username: filegps

    password: ""   # Provide the secret name which contains encrypted-rest-authentication-password

    keystore_name: keystore.pem #generate the keystore.pem and place the pem file in the Helm certs path

    cert_password: "" # Provide the secret name which contains encrypted-rest-authentication-cert_password

    use_ssl: true    #use ssl termination for rest consumer traffic

    use_ssl_password: true
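
One way to produce the keystore.pem referenced above is to concatenate the server certificate and private key into a single PEM file. This is a sketch assuming the cluster certificate and key are available as tls.crt and tls.key; adjust the file names to your environment.

# Combine the certificate and private key into a single PEM keystore, then place it in the Helm certs folder
cat tls.crt tls.key > keystore.pem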

6.5 Rest consumer Application Configuration

·       Provide zoneid and database type in the config section

     [filegps]

    zoneid = America/Chicago

    isNodeIdPresent = True

    updatecontext = 24

    duration = 3600000

    transactionperiod = 14

    clientduration = 86400000

    uncorrelated_deletion = 7

 

    [database_type]

    db_type = oracle

7. Client Configuration

7.1 Image configuration

·       Give the Image repository and tag.

  client:

  image:

    repository: # Container image for the client

    tag:   # Specify image tag to use

7.2 Resource configuration

·       Provide the resource limits such as CPU and Memory as needed.

  resources:

    requests:

      memory: "1Gi" #specify the memory request as needed

      cpu: "1000m"  #specify the cpu cores request as needed

    limits:

      memory: "2Gi" #specify the maximum memory a pod can utilize

      cpu: "2000m" #specify the maximum cpu a pod can utilize

8. Installing Application

  • After updating all the values in values.yaml, install the Helm chart, which deploys all the resources defined in the templates folder.
  • Save the values.yaml file.
  • To install the helm chart:
    helm install <release_name> -f <path of values.yaml> <path/to/helmchart>
  • To upgrade the helm chart:
    helm upgrade <release_name> -f <path of values.yaml> <path/to/helmchart>
  • To roll back the helm chart by giving a revision number:
    helm rollback <release_name> <revision_number>
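
After the install or upgrade completes, the following verification commands (release, namespace, and resource names are placeholders) can confirm that the resources came up as expected:

# Check the release status and deployed revision
helm list -n <namespace_name>
helm status <release_name> -n <namespace_name>

# Confirm the FileGPS pods are running, and inspect recent events if they are not
oc get pods -n <namespace_name>
oc get events -n <namespace_name> --sort-by=.lastTimestamp

# Retrieve the routes/hostnames exposed for the UI and REST consumer
oc get routes -n <namespace_name>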