IANN AI User Manual
This Manual serves as a comprehensive guide to understand and utilize the IANN (Intelligent Artificial Neural Network) system effectively. It provides essential information about the system’s purpose, core components, and operational guidance to help users make the most of its capabilities.
IANN (Intelligent Artificial Neural Network) is Pragma Edge’s AI-powered unified platform designed to bring intelligence, automation, and predictive analytics into business operations. It combines file tracking, monitoring, and AI-driven insights to ensure end-to-end visibility, operational efficiency, and proactive issue resolution.
IANN plays a key role in driving digital transformation across industries by turning traditional data exchanges into intelligent, insight-driven processes.
IANN is built as a modular system with three primary components: FileGPS, Monitor, and AI.
This manual is intended for individuals and teams within the organization who are directly or indirectly involved in overseeing, maintaining, and responding to file movement, alert notifications, and AI-based monitoring insights across systems using the IANN platform (FileGPS, Monitor, AI).
The following roles and departments should use this manual:
This manual serves as a unified guide for users of the IANN (Intelligent Artificial Neural Network) platform. It provides detailed, step-by-step instructions and contextual overviews across the eight key IANN components: IANN FileGPS, File Anomalies, Transaction Anomalies, IANN Monitor, Monitor Anomalies, Error Recommendation, Error Reprocessing, and File Transaction Search.
IANN FileGPS Application
IANN File Anomalies
IANN Transaction Anomalies
IANN Monitor Application
IANN Monitor Anomalies
IANN Error Recommendation
IANN Error Reprocessing
IANN File Transaction Search
This manual is designed to equip administrators, analysts, and monitoring teams with the knowledge needed to fully leverage the IANN platform’s capabilities in detecting anomalies, ensuring operational transparency, and maintaining high system reliability across file and transaction landscapes.
IANN File Anomalies is a core component of the IANN FileGPS system, purpose-built to monitor and detect irregularities in file transfer activities across enterprise systems. This module plays a critical role in ensuring that files are received as expected, both in terms of time and volume. Irregularities such as missing files, unexpected file arrivals, or abnormal file counts can often signal potential issues that impact data reliability, business continuity, or regulatory compliance.
The File Anomalies module utilizes historical delivery patterns and predictive logic to identify deviations from expected behaviour automatically. This early detection capability empowers teams to act proactively and maintain operational health.
By continuously monitoring file activity, the File Anomalies module strengthens the overall reliability of data transfers, supports auditability and compliance, and helps safeguard the performance of file-driven business operations.
The IANN File Anomalies Detection System is designed to ensure the accuracy, integrity, and timeliness of file transfers within the IANN FileGPS platform. It leverages historical patterns, intelligent file classification, and AI-based prediction techniques to detect irregularities in file structures and naming conventions across managed file transfer workflows.
By analysing previously observed file patterns and validating daily file arrivals against those expected formats, the system identifies discrepancies such as format mismatches, missing files, and structural inconsistencies. It ensures that every file adheres to the expected schema and arrival behaviour as defined through historical learning.
The system supports automated daily executions, making it well-suited for enterprise environments where early detection of anomalies is crucial for maintaining data flow consistency and operational reliability.
Detected anomalies are enriched with metadata including file name, timestamp, error category, and severity level. These are logged systematically, enabling rapid triage, root cause analysis, and integration with existing alerting or ticketing systems.
Core Capabilities:
| Table Name | Purpose |
|---|---|
| order_events_data | Input table with raw file event data. Cleaned and preprocessed for analysis. |
| fgps_nxt_expe_date | Stores predicted next file arrival dates based on historical trends. |
| ian_file_anomalies | Captures final detected anomalies for reporting and analysis. |
1. Staging Table Workflow:
a) Data Preprocessing:
· Cleanses input data by handling missing values and formatting errors.
b) Frequency Pattern Detection:
· Identifies historical file arrival frequencies and stores them for comparison.
c) Boundary Calculation:
· Defines lower and upper thresholds for expected file counts.
· Any deviation beyond these is flagged as an anomaly.
2. Next File Date Prediction Table Workflow:
a) Connect to the fgps_nxt_expe_date table to retrieve recent data.
b) Filters records with a date earlier than the target date.
c) Uses frequency prediction logic to loop until a new date exceeding the target date is generated.
d) Deletes outdated entries and updates the table with the latest predicted dates.
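The prediction loop in steps b) through d) can be sketched as follows. This is a minimal illustration, assuming the learned frequency is a fixed number of days; the function name and signature are hypothetical, not taken from the actual codebase.

```python
from datetime import date, timedelta

def predict_next_date(last_seen: date, frequency_days: int, target: date) -> date:
    """Roll the predicted arrival date forward by the learned frequency
    until it exceeds the target date (mirrors step c of the workflow)."""
    predicted = last_seen
    while predicted <= target:
        predicted += timedelta(days=frequency_days)
    return predicted

# e.g., a file that historically arrives every 7 days, last seen 2024-07-01
next_date = predict_next_date(date(2024, 7, 1), 7, date(2024, 7, 20))
```

In this example the loop advances 2024-07-08, 2024-07-15, then stops at 2024-07-22, the first date past the target; step d) would then replace the outdated row in `fgps_nxt_expe_date` with this value.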
3. Daily Anomalies Detection Workflow:
a) Load records from fgps_nxt_expe_date that match the target date.
b) Predict expected file patterns using:
· Custom functions like replace_numbers
· Grouping operations for duplication
c) Pattern Embedding & Similarity Matching:
· Generate embeddings for both expected and actual file patterns.
· Perform similarity search in both directions:
a) Missing Files: Files not received but expected.
b) Unexpected Files: Files received but not expected.
d) File Count Validation:
· For matching records, ensure file count lies between calculated thresholds.
· Mark deviations as anomalies.
e) All detected anomalies are concatenated and written to ian_file_anomalies.
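The daily detection workflow above can be sketched in simplified form. This is an illustrative assumption, not the production code: `replace_numbers` is reimplemented as a plain digit-masking helper, and the embedding-based similarity search is approximated by exact pattern matching.

```python
import re

def replace_numbers(name: str) -> str:
    # Assumed behaviour: mask digit runs so e.g. report_20240701.csv
    # and report_20240702.csv collapse to one pattern, report_#.csv.
    return re.sub(r"\d+", "#", name)

def detect_anomalies(expected, actual, bounds):
    """expected/actual: lists of file names; bounds: pattern -> (low, high) counts."""
    exp = {replace_numbers(f) for f in expected}
    act = {replace_numbers(f) for f in actual}
    anomalies = []
    anomalies += [("missing", p) for p in exp - act]      # expected, not received
    anomalies += [("unexpected", p) for p in act - exp]   # received, not expected
    # count validation for patterns present on both sides
    counts = {}
    for f in actual:
        p = replace_numbers(f)
        counts[p] = counts.get(p, 0) + 1
    for p in exp & act:
        low, high = bounds.get(p, (0, float("inf")))
        if not (low <= counts[p] <= high):
            anomalies.append(("count_mismatch", p))
    return anomalies
```

The resulting tuples correspond to the rows written to ian_file_anomalies; the real pipeline additionally attaches metadata such as timestamp, error category, and severity.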
FileGPS Transactions is a critical component designed to help organizations track and monitor the flow of transactional files with accuracy and predictability. In large-scale environments, where data is exchanged frequently between systems or partners, it is essential to ensure that transactions are received on schedule and in expected volumes to maintain operational continuity and trust in the data pipeline.
However, real-world processes are prone to deviations. Files may arrive late, be missed entirely, or contain unexpected volumes—all of which can disrupt workflows or indicate deeper issues. To proactively address these challenges, FileGPS leverages intelligent anomaly detection to flag irregularities and improve system reliability.
The Transaction Anomalies feature helps to detect and respond to unusual patterns in file activity, offering several key benefits:
This manual will guide you through the features, user interface, and workflows of the Transaction Anomalies module in FileGPS.
Enter the FileGPS URL provided by your administrator in your preferred browser, then log in with your credentials.
Navigate to “Search Files” → “Anomaly Search” → “Transaction Anomaly”
In the Transaction Anomaly tab, select your preferred criteria such as date, client name, direction, sender/receiver ID, and more to filter and fetch anomalies specific to your selection.
Next click the Search button to view the anomaly results, or use Reset to clear the filters and start a new search.
After clicking “Search”, all the anomalies as per the selected criteria will be displayed.
2.4.1. Missing anomalies
A Missing Anomaly indicates that an expected transaction file was not received on the scheduled date. This may point to a delay, a system issue, or an upstream failure. Such anomalies can disrupt downstream processes that rely on timely data.
It’s recommended to investigate the source and reinitiate the transaction if necessary.
Clicking “View Graph” displays a visual representation of the transaction history, where anomalies are clearly highlighted using a red dot as shown below.
2.4.2. Unexpected anomalies
An Unexpected Anomaly occurs when a transaction file is received on a date it was not scheduled for. This could indicate an unplanned transmission or a misconfigured sender schedule. It may pose risks to data integrity or compliance if not reviewed.
2.4.3. Count mismatch anomalies
A Count Mismatch Anomaly arises when the number of received transaction files differs from the expected count. This may signal missing files, duplicates, or extra files caused by retries or failures. It can impact systems relying on exact data volumes for processing.
The Subscription of Alerts feature in IANN FileGPS enables users to receive timely notifications whenever a transaction anomaly is detected. Users can customize alert settings based on specific criteria.
Once subscribed, users can choose their preferred notification method—via Dashboard, Email, or REST API—ensuring they are always informed about critical anomalies like Missing Files, Unexpected Transactions, or Count Mismatches. This helps streamline monitoring and ensures quick action on irregularities.
Admins can manage alert recipients, set priorities, and define the message content, providing flexibility and control over how alerts are delivered and handled.
As shown below, click the button for the transaction on which an anomaly subscription is required; a “Subscribe” button will pop up. Click on it.
Once the “Subscribe” button is clicked, you will be navigated to the page shown below.
Click on “Yes” button to proceed with subscription creation.
As part of the subscription process, an Entity has to be created for every client. To create an entity, provide the ‘Entity Name’, ‘Entity Id’, and ‘Time to Process’.
· Entity Name – Client name
· Entity Id – a unique ID representing the client (can be the same as the client name)
· Time to Process – the time allowed to complete the transaction activity for it to be considered a successful transaction.
Once the required details are provided, scroll down and click the ‘Create’ button.
Now go back to the anomalies page and click the ‘Subscribe’ button; it will navigate to the page shown below.
Now fill ‘Alert Name’, ‘Priority’, ‘Subject’ and ‘Body’.
Select ‘Notification’ management as per the requirement and click ‘Create’.
The above steps will successfully subscribe to the anomaly alerts.
IANN Monitor Anomalies is an advanced, proactive monitoring solution engineered to identify, explain, and visualize anomalies in time-series metric data. Designed for enterprise and integration platform environments, IANN enhances operational visibility, reduces system downtime, and supports self-healing capabilities through intelligent automation.
Traditional monitoring tools often rely on static thresholds, which can result in high false positives and insufficient adaptability. IANN addresses these limitations by leveraging machine learning to dynamically detect anomalies in critical performance metrics—such as memory usage, CPU load, queue sizes, and custom application indicators—without the need for manual configuration or labeled data.
· Anomaly Detection Using ML: Utilizes the Isolation Forest algorithm, a robust, unsupervised learning method tailored for anomaly detection. It detects rare or unusual data points without requiring labeled datasets.
· Contextual Explanation: Enhances anomaly insights using a built-in explanation engine (llm.py) powered by a large language model (LLM), which provides human-readable summaries of what caused the anomaly and its severity.
· Interactive Dashboards: Detected anomalies are visualized through a rich, filterable web interface. Users can drill down into time-series graphs to investigate issues in real-time.
· No Container Dependency: The entire solution runs natively on MobaXterm, a terminal client that supports Unix commands in Windows environments. This simplifies deployment in air-gapped or container-restricted enterprise environments.
· Threshold Management via UI: A web-based configuration panel allows domain users or system engineers to define, preview, and apply metric thresholds without modifying code or restarting services.
· Flexible Alerting Mechanism: Integrates with various alerting systems (e.g., email, webhooks) to trigger notifications based on configured anomaly patterns or threshold breaches.
· Significantly reduces manual monitoring and operational overhead
· Accelerates root cause analysis with AI-generated anomaly explanations
· Minimizes false positives through adaptive learning models
· Streamlines deployment without requiring Docker or Kubernetes
· Ensures auditability and transparency via persisted logs and model artifacts
· Site Reliability Engineers (SREs)
· DevOps and Platform Teams
· Monitoring and NOC Teams
· Application Support Engineers
The IANN Monitor Anomalies System is architected with a modular, lightweight design, optimized for rapid deployment, maintainability, and enterprise readiness. Built to operate seamlessly in container-restricted environments, it integrates key components (data ingestion, anomaly detection, orchestration, and visualization) to deliver intelligent, explainable monitoring across distributed systems.
Implements Isolation Forest, a robust unsupervised machine learning algorithm ideal for detecting outliers in high-dimensional data.
1. *_anomaly_model.pkl: Persisted model files trained per datapoint/index.
2. *_baseline_stats.pkl: Stores baseline statistics including mean, median, and standard deviation for contextual scoring.
The model automatically retrains when user-defined thresholds are updated or when significant shifts in data patterns are observed—ensuring accuracy over time.
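The baseline-statistics artifact described above can be sketched as follows. This is a minimal illustration assuming the stats are a plain pickled dict; the helper name and exact dict keys are assumptions, though the file-name convention (`datapoint_indexname_baseline_stats.pkl`) follows the artifacts listed above.

```python
import pickle
import statistics

def save_baseline_stats(datapoint: str, index: str, values: list[float]) -> str:
    """Compute and persist the baseline statistics (mean, median, standard
    deviation) that contextual scoring later loads for this datapoint/index."""
    stats = {
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "stdev": statistics.stdev(values),
    }
    path = f"{datapoint}_{index}_baseline_stats.pkl"
    with open(path, "wb") as fh:
        pickle.dump(stats, fh)
    return path

# e.g., persist a week of heap-usage samples for the client_prod index
artifact = save_baseline_stats("heap", "client_prod", [1.0, 2.0, 3.0, 4.0, 5.0])
```

The companion `*_anomaly_model.pkl` file would be produced the same way, pickling the trained Isolation Forest model instead of the stats dict.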
Built using Python’s native sched module, avoiding external dependencies for improved portability and simplicity.
Frequencies are defined in config.ini (e.g., every 30 minutes) for precise control of anomaly checks.
· Storage:
1. anomalies.csv: Stores all anomaly records with metadata.
2. threshold_tracker.csv: Maintains history of user-defined thresholds.
3. Elasticsearch Index (config['elasticsearch']['index_name']): Stores final anomaly results for UI and dashboard integration.
· Enrichment:
1. Anomaly records are supplemented with:
1. Deviation scores (e.g., close, moderate, far)
2. Natural language descriptions using llm.py
3. Context such as environment, metric name, timestamp
· Visualization: Output is consumed by a web-based dashboard that supports anomaly filtering, graph rendering, and alert configuration.
Runs entirely within MobaXterm, eliminating the need for Docker, Kubernetes, or third-party orchestrators—ideal for air-gapped or security-sensitive environments.
Each module operates independently, allowing for easy upgrades, replacements, or customizations.
Engineered for resilient, explainable, and scalable anomaly detection in complex production environments.
The IANN system is composed of modular components that work cohesively to detect, manage, and visualize anomalies. Each component is designed for a specific role and is loosely coupled for flexibility and maintainability.
· Purpose: Acts as the central data repository.
· Responsibilities:
1. Stores raw input metrics ingested from client systems.
2. Saves user-defined thresholds for anomaly detection (index_UI).
3. Hosts final anomaly detection outputs for visualization (config['elasticsearch']['index_name']).
· Benefit: Scalable, searchable, and real-time accessible datastore for both input and output data.
· Purpose: Provides an intuitive interface for non-technical users.
· Key Features:
1. Preview and save threshold values for different metrics.
2. View detected anomalies with filtering and graph capabilities.
3. Configure anomaly alert rules and notification settings.
· Benefit: Empowers business users and support engineers to interact with the system without editing code.
· Purpose: Manages the full lifecycle of ML models.
· Functions:
1. Checks for updated thresholds and triggers retraining.
2. Saves models and statistical baselines as .pkl files.
3. Ensures the latest models are applied to incoming data.
· Benefit: Ensures anomaly detection adapts to changes in baseline behavior.
· Purpose: Core logic for identifying anomalies.
· Technology: Implements the Isolation Forest algorithm.
· Process:
1. Fetches real-time data.
2. Applies the trained model.
3. Classifies data points and assigns severity labels.
4. Adds natural language explanations for detected anomalies.
· Benefit: Accurate and explainable detection of outliers in time-series data.
· Files:
1. anomalies.csv: Local record of all detected anomalies.
2. threshold_tracker.csv: Historical log of threshold settings per datapoint.
· Benefit: Enables offline analysis, version tracking, and auditing.
· Purpose: Visual front-end for monitoring and insight.
· Features:
1. Time-series graphs of metrics and anomalies.
2. Color-coded deviation markers (normal vs anomalous).
3. Drill-downs by index, metric, and environment.
· Benefit: Helps users visually interpret anomaly trends over time.
· Purpose: Automates periodic execution.
· Details:
1. Uses Python’s built-in sched module.
2. Scheduling interval configurable via config.ini.
3. Runs within the same terminal session (e.g., MobaXterm).
· Benefit: Lightweight, no external scheduler dependency (like cron or Airflow).
The IANN Monitor Anomalies system follows a structured pipeline to process metrics, detect anomalies, and present results. This pipeline is fully automated and runs at scheduled intervals defined by the user.
· Initiation: Triggered from the UI when users preview or save a threshold.
· Process:
1. The selected threshold value, along with the datapoint name, index, and environment, is written into the index_UI index in Elasticsearch.
2. These thresholds act as references for model training and anomaly scoring.
· Record Example:
```json
{
  "dataPoint": "memory_utilization",
  "indexName": "client_prod",
  "threshold": 0.02,
  "environment": "prod"
}
```
· Benefit: Decouples threshold configuration from code changes, making the system user-friendly and flexible.
· Trigger Conditions:
1. A new threshold has been set.
2. The corresponding model does not exist yet.
· Actions Performed:
1. Compares the current threshold with historical data in threshold_tracker.csv.
2. If changes are detected, retrains the Isolation Forest model for that datapoint.
3. Saves two artifacts:
§ datapoint_indexname_anomaly_model.pkl: Trained anomaly detection model.
§ datapoint_indexname_baseline_stats.pkl: Baseline stats such as mean, median, standard deviation.
· Automation: Managed via model_management.py and train_and_save_model.py.
· Input: Live metric data is pulled from the client_* indices in Elasticsearch.
· Workflow:
1. Loads the relevant .pkl model for the datapoint.
2. Scores each incoming data point to determine whether it’s anomalous.
3. Calculations:
§ Mean and median deviation.
§ Deviation label (e.g., close, moderate, far).
4. Enhances the result using llm.py, which adds a natural language explanation for the anomaly.
Example explanation: “Anomaly detected: Heap usage increased significantly from the median baseline. Likely memory leak.”
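The deviation labelling and explanation steps can be sketched as below. This is a deterministic stand-in for illustration only: the z-score cut-offs for close/moderate/far are assumed, and the real llm.py produces richer, LLM-generated text rather than a fixed template.

```python
def label_deviation(value: float, median: float, stdev: float) -> str:
    """Bucket a datapoint's distance from the median baseline into the
    close / moderate / far labels. The z-score cut-offs are illustrative."""
    z = abs(value - median) / stdev if stdev else float("inf")
    if z < 2:
        return "close"
    if z < 4:
        return "moderate"
    return "far"

def explain(metric: str, value: float, median: float, stdev: float) -> str:
    # Deterministic stand-in for the natural-language summary from llm.py.
    label = label_deviation(value, median, stdev)
    return (f"Anomaly detected: {metric} = {value} is {label} relative to "
            f"the median baseline of {median}.")
```

A heap-usage reading six standard deviations above the median would thus be labelled "far" and summarised accordingly before being written to the output index.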
· Storage:
1. Locally: anomalies.csv
2. Elasticsearch: Output index defined in config['elasticsearch']['index_name']
· Data Format:
Each anomaly record includes:
1. Timestamp
2. Metric value
3. Threshold
4. Deviation score and label
5. Explanation
6. Source index and environment
· Purpose:
1. Enables dashboard rendering.
2. Facilitates long-term tracking and alerting.
3. Allows integration with downstream notification systems.
Index Structure
The IANN system utilizes Elasticsearch indices for storing and retrieving different types of data involved in the anomaly detection lifecycle. Each index serves a dedicated purpose, ensuring organized and efficient access to time-series metrics, thresholds, and anomaly results.
1. Raw Metric Data
· Index Pattern: client_*
· Purpose: Serves as the primary input source for the system, containing historical and real-time metrics collected from client systems or applications.
· Data Contents:
1. Timestamps
2. Metric values (e.g., CPU, memory, heap usage)
3. Metadata (e.g., environment, server ID, application name)
· Usage: This index is queried during anomaly detection to fetch the latest datapoints to be analyzed.
2. Threshold Configurations
· Index Name: index_UI
· Purpose: Stores threshold settings submitted by users via the web UI.
· Data Contents:
1. dataPoint: Name of the metric (e.g., memory_utilization)
2. indexName: Source index to which the threshold applies
3. threshold: Numerical threshold value
4. environment: Context such as dev, uat, or prod
· Usage: Used during model training to determine whether to retrain and which threshold to apply for scoring anomalies.
3. Anomaly Output
· Index Reference: config['elasticsearch']['index_name']
· Purpose: Acts as the final output repository for all detected anomalies.
· Data Contents:
1. Anomaly timestamp and metric value
2. Threshold and deviation scores
3. Severity label (close, moderate, far)
4. Natural language explanation
5. Contextual metadata (environment, indexName, dataPoint)
· Usage: Queried by the UI dashboard for visualization, filtering, and alerting purposes.
Note: These indices can be managed or scaled independently. Data retention policies, backups, and index lifecycle management (ILM) should be considered for long-term monitoring in production environments.
Scheduling & Execution
The scheduling and execution mechanism in the IANN Monitor Anomalies system is designed for simplicity, reliability, and low overhead. It ensures that anomaly detection runs consistently at user-defined intervals, without requiring external tools like cron jobs or third-party orchestrators.
1. Scheduling Logic
· Technology Used: Python’s built-in sched module.
· Why sched?
1. Lightweight and Python-native.
2. Avoids the need for system-level cron services or external schedulers like Airflow.
3. Simple to configure and maintain in self-contained deployments (e.g., via MobaXterm).
2. Configuration
· The execution interval is defined in the config.ini file under the scheduler section.
· Example configuration:
```ini
[scheduler]
scheduler_time = 30
```
· The interval determines how often the anomaly detection process runs (e.g., every 30 minutes).
3. Execution Instructions
· Once initiated, the script:
1. Loads all necessary configurations and thresholds.
2. Triggers model training if needed.
3. Performs anomaly detection based on the defined schedule.
4. Saves output to CSV and Elasticsearch.
4. Runtime Behaviour
· All executions occur within the same terminal session (e.g., MobaXterm).
· If the terminal or session is closed, the scheduled job stops unless re-launched.
· Logs and progress indicators are printed to the console for monitoring.
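The scheduling mechanism described above can be sketched with Python's built-in modules. This is a minimal, self-contained illustration: the inline config string stands in for config.ini, the `run_detection` body is a placeholder, and the interval unit (minutes) follows the example above.

```python
import configparser
import sched
import time

config = configparser.ConfigParser()
config.read_string("""
[scheduler]
scheduler_time = 30
""")  # in production this would be config.read("config.ini")

interval = int(config["scheduler"]["scheduler_time"]) * 60  # minutes -> seconds
scheduler = sched.scheduler(time.time, time.sleep)

def run_detection():
    print("running anomaly detection ...")  # placeholder for the real pipeline
    # re-schedule the next run, giving the fixed-interval behaviour
    scheduler.enter(interval, 1, run_detection)

scheduler.enter(0, 1, run_detection)
# scheduler.run()  # blocks the current terminal session, as noted above
```

Because `scheduler.run()` blocks, closing the terminal session stops the job, which matches the runtime behaviour described above.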
The IANN Monitor Anomalies system includes a lightweight, browser-accessible Web UI that enables users to interact with anomaly data, configure thresholds, and manage alerts. It is designed for ease of use by non-technical users such as support engineers, business analysts, and platform operations teams.
Anomaly Section Navigation
· Purpose: Allows users to test and store custom threshold values for each metric/index.
· Workflow:
1. Select an Index Name (e.g., client_prod)
2. Choose a Metric (e.g., heap_usage)
3. Enter a Threshold value (e.g., 0.02)
4. Click Preview to simulate the threshold’s effect
5. Click Save to persist the setting into the index_UI index in Elasticsearch
Configurable Anomaly Threshold
· The Threshold value is fully configurable by the user to control the sensitivity of anomaly detection.
· It defines the proportion of data points considered anomalous:
1. Lower values (e.g., 0.004) detect only the most extreme deviations.
2. Higher values (closer to 0.05) allow for broader anomaly inclusion, capturing more subtle outliers.
· Users can adjust this threshold dynamically, within a range of 0 to 0.05, to fine-tune detection based on system behaviour, dataset variability, or monitoring needs.
The range between 0 and 0.05 includes thousands of possible values (e.g., 0.001, 0.0011, 0.0025, …, 0.0499), allowing for very precise tuning.
Even a small change in threshold (e.g., from 0.010 to 0.012) can significantly impact the number of detected anomalies in large datasets.
· Saved Thresholds Table:
1. Displays configured thresholds per index and datapoint
2. Supports edit and delete actions
· Benefit: Empowers non-developers to fine-tune detection logic based on their domain understanding.
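Because the threshold defines the proportion of points treated as anomalous, its effect on alert volume is simple arithmetic. The sketch below is illustrative (the helper name is hypothetical) and shows why even small threshold changes matter on large datasets.

```python
def expected_anomalies(n_points: int, threshold: float) -> int:
    """Approximate number of points flagged for a given threshold,
    using the proportion interpretation described above."""
    return round(n_points * threshold)

# tuning effect on a day of 10,000 samples
for t in (0.004, 0.010, 0.012, 0.050):
    print(t, expected_anomalies(10_000, t))
```

On 10,000 samples, moving from 0.010 to 0.012 raises the expected anomaly count from about 100 to about 120, a 20% jump from a seemingly tiny change.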
The View Anomaly section serves as the central monitoring dashboard, providing users with real-time visibility into all detected anomalies across environments. It supports quick diagnosis, contextual filtering, and proactive subscription to anomaly alerts.
Key Features and User Flow
1. Time-Based Filtering
· A built-in date and time range picker allows users to narrow results to a specific analysis window.
· Supports preset ranges (e.g., “Last 5 mins”, “Last 1 hour”, “Last 6 hours”) and custom date-time selections.
· Once the range is selected, users can click Apply to refresh the anomaly view accordingly.
2. Metric-Based Filtering
· Dropdown menu titled “Filter Based on Metrics” enables filtering results by metric name (e.g., CPU Utilization, Network Traffic, Custom App Metrics).
· Helps users focus on specific performance indicators of interest.
· After selecting a metric, click Search to fetch relevant anomalies or reset to clear filters.
3. Anomaly Results Table
Displays all anomaly events matching the selected filters with the following columns:
· Timestamp: Exact time of the anomaly occurrence.
· Metrics: Name of the metric that triggered the anomaly.
· Value: Actual recorded value at the anomaly point.
· Weekly Hourly Min / Max / Avg: Statistical context comparing the anomaly against historical weekly baselines.
· Description: AI-generated explanation summarizing the anomaly, including severity and deviation range.
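The Weekly Hourly Min / Max / Avg columns compare each anomaly against a baseline keyed by hour of the week. A minimal sketch of that grouping, with an assumed helper name and plain (timestamp, value) input:

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

def weekly_hourly_stats(samples):
    """samples: iterable of (datetime, value) pairs. Groups values by
    (weekday, hour) and returns the min/max/avg baseline per bucket."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[(ts.weekday(), ts.hour)].append(value)
    return {
        key: {"min": min(vs), "max": max(vs), "avg": mean(vs)}
        for key, vs in buckets.items()
    }

data = [
    (datetime(2024, 7, 1, 9), 40.0),   # Monday 09:00
    (datetime(2024, 7, 8, 9), 60.0),   # Monday 09:00, one week later
]
stats = weekly_hourly_stats(data)
```

A value observed on a Monday at 09:00 is then judged against the (Monday, 09:00) bucket rather than an all-hours average, so routine weekly load patterns are not flagged as anomalies.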
4. Action Buttons
Each row includes actionable options:
1. View Graph:
Purpose: Offers a detailed time-series visualization of anomalies over time.
Graph Features:
· Red Dots: Identified anomalies
· Blue Dots: Normal metric values
· Yellow Marker: The currently selected anomaly
· Zoom and pan support for easier navigation
· Context Window: Shows 7 days of data before and 3 anomalies before/after the selected point
· Chunk Navigation: Helps browse large time-series datasets
2. Subscribe:
Opens a subscription panel to configure Create Anomaly Alert.
Users can choose:
Alert Type: E.g. Critical, Warning.
Notification Channel: Email, etc.
Alert Name & Description: Define label and context for alert.
Upon creation, the alert is stored and actively monitored.
· Purpose: Configure automated notification rules for anomalies.
· Fields in the Configuration Panel:
1. Indices and Metrics to monitor
2. Alert Name and Description
3. Alert Type: e.g., spike detection, sustained threshold breach
4. Notification Channels: Email, webhook, or custom integrations
· Benefit: Ensures stakeholders are alerted in real-time without manual intervention.
This manual is intended to help users understand and use the Error Recommendation Tool, a component of the larger IANN FileGPS system. The tool assists in detecting and resolving file exchange issues between systems or applications.
In business processes, file transfers occur automatically between systems. Occasionally, these transfers fail due to incorrect setups, delivery issues, or communication failures. Such problems can disrupt operations and cause delays.
The Error Recommendation Tool simplifies this process. Instead of relying on technical teams, users can:
The tool is user-friendly, requires no technical expertise, and empowers users to take immediate action.
This workflow enables users to systematically identify, review, and reprocess failed file transactions, ensuring timely resolution and minimizing operational disruptions.
· Launch a web browser and enter the IANN FileGPS URL provided by your administrator.
· Log in with your username and password.
· Upon successful login, you will be redirected to the dashboard.
· From the dashboard, go to File Search → Error Step.
· Select a From Date, To Date, and choose the Step as Error.
· Click Search to view the list of failed transactions.
· Click the Action button for the relevant file.
· Select IANN Insight.
After clicking View Insight, the system will display:
· Detailed error descriptions
· Suggested resolution steps
If the result is unsatisfactory, click Regenerate to fetch updated suggestions from the LLM (Large Language Model).
What It Can Do
What It Cannot Do
This tool is designed to assist, not replace, technical teams. It functions as a smart, first-level responder for routine problems.
Think of the tool as a smart assistant for understanding file-related errors.
Workflow:
The Error Recommendation Tool is designed to make error handling more efficient.
Key Features:
Key Benefits
Decision Rules in IANN FileGPS allow you to automatically respond to common file errors based on defined logic.
This section walks you through how to create, view, edit, and update those rules from the UI.
To enhance automated recovery and minimize future errors:
· Navigate to Automated Fix module
1. Go to the “Error Management” tab at the top of the FileGPS interface.
2. Click on the “Create Rule” tab.
3. In the Create Decision Rule section, fill in the following fields:
Enter a clear name to identify the rule.
Example: Mailbox Error
Provide a brief explanation of the error this rule will address.
Example: Handles errors related to missing or invalid mailbox configuration.
Select the type of handling from the dropdown menu:
a) Autofix – The system will try to fix the error automatically.
b) Manual – Requires user intervention to resolve.
c) Rule Based – Applies a set of pre-configured rules/actions.
4. Once all fields are filled, click the Create button to save your rule. If you want to cancel, click Cancel.
After a decision rule is created, it can be viewed, activated, deactivated, or modified from the Manage Rule tab.
Steps to View or Update a Rule:
1. Go to the “Manage Rule” tab under Error Management.
2. Use the filters to find the rule:
· Enter a Decision Name
· Select a Decision Type or Error
3. Click Search to view the list of matching rules.
What You Can Do Here:
· View Rule Details
– Check the type, error, description, and retry interval.
· Activate / Inactivate – Use the ACTIVATE or INACTIVATE buttons to control the rule status.
· Status Column – Shows if the rule is currently ACTIVE or INACTIVE.
· Retry Interval – Number of times the system will retry if the fix doesn’t work the first time.
When you click Edit Decision, you’ll be taken to the rule editor, where you can manage the actions the rule performs when triggered.
Available Actions:
· Add a Rule
Select a rule from the left panel and click Add to assign it.
· Edit a Rule
Select a rule from the right panel and click Edit to change its configuration.
· Remove a Rule
Click on a rule in the Rules Applied list and select Remove.
· Reorder Rules (Optional)
Toggle Enable Drag to rearrange the order of execution.
Once all changes are made, click Update to save or Close to cancel.
Reusable rules define specific actions that can be triggered as part of a decision rule. Once created, these rules can be added to multiple decision rules when defining how to fix or handle an error.
3.4.1 Creating a Rule
To create a reusable rule:
1. Go to the Error Management tab.
2. Click on the Create Rule tab at the top.
You will be presented with the Rule Details form.
Fill in the Following Fields:
· Rule Name
Provide a clear, descriptive name for the rule.
Example: Validate Drop Directory
· Action
Select the technical action or process the rule should perform from the dropdown list.
Example: API_PICKUP_BP_1MIN or AFTRoute
Once the required fields are filled:
· Click Create or Save to store the rule.
3.4.2 Managing Existing Rules
To view or manage already created rules:
1. Navigate to the Manage Rule tab.
2. Use the Rule Name or Action filter to find the rule you want.
3. Click Search to view a list of matching rules.
Available Options
Click the three-line menu next to any rule to access the following actions:
· Edit – Modify the rule name or change the associated action.
· Delete – Permanently remove the rule.
This step helps ensure similar errors in the future are automatically routed or addressed with pre-defined logic.
This manual is created to help users understand and use the File Transaction Search feature available through the FileGPS platform. This feature allows users to ask questions about files such as their status, direction, or partner in plain language, without needing technical knowledge.
It is designed to make daily work easier by allowing users to get the answers they need quickly and clearly. Instead of waiting for technical teams or browsing large databases, users can simply type their question and get a meaningful response through the assistant.
The goal of this feature is to help business and support teams quickly locate details about file transactions such as:
It allows non-technical users to gain insights that traditionally required knowledge of SQL queries or backend systems. By simplifying access to this data, the feature enhances productivity and reduces reliance on IT teams.
What can it do:
What it cannot do:
This feature acts like a smart assistant behind the scenes. When a user types a question, the following steps happen:
Users only need to ask the question in their own words. The system handles the rest, making the whole process feel conversational and natural.
To begin using the File Transaction Search feature:
This simple process avoids opening complex dashboards or running manual searches; everything is delivered in a single step.
User:
“What happened to file SalesReport_India?”
Chat Assistant:
“The file SalesReport_India was received at 3:10 PM and completed successfully at 3:15 PM.”
User:
“Check ABC_File_20240721”
Chat Assistant:
“The file ABC_File_20240721 failed due to incorrect format. Please check the configuration.”