
01. An Idea for Realising Continuous Risk Monitoring Leading to Continuous Auditing Practices

Continuous risk monitoring leading to a risk-based continuous auditing practice


There are numerous architectures for achieving continuous monitoring. We have to choose the optimal architecture based on our specific needs and the domain/context we are addressing.

 

I am suggesting a data-driven, automated or semi-automated solution architecture to address the problem of continuous risk/audit monitoring.

 

The approach is illustrated in the diagram below:

 

 

Some strategic assumptions based on which the solution is outlined are given below:


  1. The company has an active data warehouse or a similar solution.

  2. The company may have implemented continuous controls monitoring, at least on some of the key controls that can be automated. This data also flows into the data warehouse.

  3. Data quality, including metadata quality, is high, and the data is readily available to business units through a secure access-control mechanism.

  4. Data duplication is avoided, and golden sources of data are tagged. One or more golden sources for GRC activities (Risk Management, Controls Testing, RCSA, Process Design, Internal Audits, Information Security, Data Privacy, Policy Management, Third-Party Management, ITSM, IT Operations) pump their data into the data warehouse.


Some technical assumptions based on which the solution design is contingent upon:


  1. The continuous monitoring platform is based on a big data architecture and supports all directorates/divisions as a common service.

  2. Data is pumped continuously (on create/on update) into the platform, preferably through streaming and big data processing tools such as Kafka or Apache Spark.

  3. Data is retained in the event-streaming system (such as Kafka) for performance.

  4. An AI-ML model that calculates the risk model score, risk model cohesion score, model rating, etc. has been deployed and consumes data from the event streams.

    1. The AI-ML model has gone through the standard stages (feature selection, feature engineering, model evaluation, model selection, etc.) before deployment.

    2. The AI-ML model is continuously retrained with new data.

    3. Common model risks (overuse, bias, immature AI, data bias) are continuously monitored and addressed.
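To make the second technical assumption concrete, here is a minimal sketch of the kind of event envelope a golden-source system could emit on create/update before publishing it to a streaming platform such as Kafka. The field names and the `GrcEvent` type are illustrative assumptions, not a standard.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical GRC event envelope; field names are illustrative assumptions.
@dataclass
class GrcEvent:
    source: str      # golden-source system, e.g. "rcsa", "itsm"
    entity_id: str   # identifier of the record that was created/updated
    action: str      # "create" or "update"
    payload: dict    # snapshot of the record
    emitted_at: str  # ISO-8601 timestamp

def to_message(event: GrcEvent) -> bytes:
    """Serialise an event as the message body for a streaming platform."""
    return json.dumps(asdict(event)).encode("utf-8")

evt = GrcEvent("rcsa", "CTRL-042", "update",
               {"control": "CTRL-042", "test_result": "fail"},
               datetime.now(timezone.utc).isoformat())
msg = to_message(evt)
```

A real deployment would hand `msg` to the producer client of whatever streaming platform is in use; the serialisation concern stays the same.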


The AI-ML model in production should address the concerns below.


  1. Fair/Not biased

  2. Robust/Reliable

  3. Privacy

  4. Safe/Secure

  5. Responsible/Accountable (Governance)

  6. Transparent/Explainable

The solution is expected to work as below:


1. Data Lake/Data Warehouse Layer


In this layer, the upstream business and GRC activities collect data and pump it into the data lake/data warehouse.

Some of the activities that can stream data into this data store are as follows:

 

·         Curated and Processed GRC Intelligent Feeds

·         Issues/Actions

·         Incidents

·         Problems

·         Cases

·         Escalations

·         External Directives

·         Internal Directives

·         BCM – Crisis

·         BCM – Failed Exercises

·         Loss Event or Risk Event

·         Risk appetite data

·         Process data/Business value data

·         Assessable Items data

·         Control Test Failures

·         KPI/KRI/KCI Metrics threshold data breaches

·         Risk Assessments

·         Audit Assessments

·         3rd-Party Ratings (such as SASB, EcoVadis, D&B, SecurityScorecard, BitSight, etc.)

etc.



The data and metadata quality should be impeccable, and an appropriate data governance process should be in place. We should be mindful here: garbage in, garbage out!
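As a minimal illustration of the garbage-in, garbage-out point, a data-quality gate can reject records that lack mandatory metadata before they land in the lake. The required field names below are assumptions for the sketch, not a prescribed schema.

```python
# Minimal data-quality gate: flag records missing mandatory metadata
# before they land in the lake. Required fields are illustrative.
REQUIRED = {"source", "entity_id", "emitted_at"}

def validate(record: dict) -> list[str]:
    """Return a list of quality issues; an empty list means the record passes."""
    issues = [f"missing field: {f}" for f in REQUIRED - record.keys()]
    if "emitted_at" in record and not record["emitted_at"]:
        issues.append("empty timestamp")
    return issues

good = {"source": "itsm", "entity_id": "INC-1",
        "emitted_at": "2024-01-01T00:00:00Z"}
bad = {"source": "itsm"}
good_issues = validate(good)   # []
bad_issues = validate(bad)     # two missing-field issues
```

In practice this check would sit in the ingestion pipeline, with rejected records routed to a quarantine topic for the data governance process to handle.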


2. Calculation Layer


In the calculation layer, a tested/trained AI-ML model should be deployed. (I am omitting the infrastructure deployment details for brevity.) The model should follow the standard model design process, including but not limited to feature engineering, feature selection, model evaluation, and model selection.

 

This model should address the common AI-ML risks such as misuse/overuse, creator bias, discrimination/exclusion, and data bias.

 

The following aspects should be taken care of through proper continuous evaluation/testing of the AI-ML model on a periodic basis:

·         Fair/Not biased

·         Robust/Reliable

·         Privacy

·         Safe/Secure

·         Responsible/Accountable

·         Transparent/Explainable
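These periodic evaluations can be partly automated. As one sketch of a fairness check, the demographic-parity gap compares how often the model flags members of different groups as high risk; the 0.2 tolerance below is an illustrative assumption, not a recommended threshold.

```python
# Sketch of one periodic fairness check: demographic-parity gap on the
# model's high-risk flags across groups of a protected attribute.
def parity_gap(outcomes):
    """outcomes: list of (group, flagged_high_risk: bool) pairs."""
    by_group = {}
    for group, flagged in outcomes:
        n, k = by_group.get(group, (0, 0))
        by_group[group] = (n + 1, k + int(flagged))
    rates = [k / n for n, k in by_group.values()]
    return max(rates) - min(rates)

sample = [("A", True), ("A", False), ("B", True), ("B", True)]
gap = parity_gap(sample)   # group A flagged at 0.5, group B at 1.0 -> gap 0.5
alert = gap > 0.2          # breaches the illustrative tolerance
```

A real evaluation would run such checks on fresh production data on a schedule and raise the alert into the same monitoring streams as the other risk signals.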

 

Anecdotally, the success rate of AI-ML models is only around 10-30%, even in a large organization with a data science team of 350+ personnel and a throughput of 1000+ model evaluations a year.


Alternatively, a simple aggregating model based on the above incoming data sets can be built to arrive at the desired outcomes, which can then be enriched with additional context, including ancestor risks and cohesive risks.
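A minimal sketch of such an aggregating model is below. The signal names, weights, and rating bands are illustrative assumptions, not prescriptions; a real model would calibrate them against the organization's risk appetite.

```python
# Illustrative weighted aggregation of incoming signal scores (each
# normalised to 0..100) into a single risk score and rating.
WEIGHTS = {
    "control_test_failures": 0.35,
    "kri_breaches": 0.25,
    "loss_events": 0.25,
    "open_issues": 0.15,
}

def risk_score(signals: dict) -> float:
    """Weighted average of per-signal scores; missing signals count as 0."""
    return round(sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS), 2)

def rating(score: float) -> str:
    """Map a 0..100 score onto illustrative rating bands."""
    return "high" if score >= 70 else "medium" if score >= 40 else "low"

s = risk_score({"control_test_failures": 80, "kri_breaches": 60,
                "loss_events": 40, "open_issues": 20})
# 0.35*80 + 0.25*60 + 0.25*40 + 0.15*20 = 28 + 15 + 10 + 3 = 56.0
```

The enrichment step mentioned above would then attach context (ancestor risks, cohesive risks) to the scored record before it is pushed downstream.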


3. Business Services Layer


The output data stream from the AI-ML model should be stored in a storage solution (such as a NoSQL database).

Further analysis should happen on the outcomes; hence, this re-processed data should be pumped into different streams pertaining to directorates, risk categories, or organizational groups/functions.

 

Different data spouts corresponding to the organizational layers should be placed in the data streams to tap them.

The organizational layer-0 data may correspond to the operational level, where the usual manual op-risk assessments happen. The data then passes through another spout where the layer-1 data may be tapped, corresponding to the tactical organizational layer, where risk score/rating aggregation may happen.

Finally, the layer-2 data corresponds to the strategic organizational level (group functions or directorates), and hence the nature of the risks, risk appetites, risk score aggregations, etc., may vary.

 

Additional layers can be built as desired by organizations adopting this design. Please note the aggregating effect of assessments, such as risk assessments, flowing from lower organizational levels to higher ones. For example, a given organizational node can perform its own assessments and may also receive assessment data from its child nodes in an aggregated manner.
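The roll-up from child nodes to a parent node can be sketched as a simple recursion over the organizational tree. The max-based roll-up rule and the node names are illustrative assumptions; an organization might instead use weighted averages or appetite-adjusted aggregation.

```python
# Sketch of an org-tree score roll-up: a node's effective score combines
# its own assessment with the worst (max) of its children's effective
# scores, so a severe lower-level risk remains visible at the top.
def effective_score(node, tree, own_scores):
    """tree: node -> list of child nodes; own_scores: node -> 0..100."""
    children = tree.get(node, [])
    child_scores = [effective_score(c, tree, own_scores) for c in children]
    return max([own_scores[node]] + child_scores)

tree = {"group": ["directorate_a", "directorate_b"],
        "directorate_a": ["ops_team"]}
own = {"group": 30, "directorate_a": 45, "directorate_b": 20, "ops_team": 85}
top = effective_score("group", tree, own)   # ops_team's 85 propagates up
```

Under the max rule, a single failing operational node dominates the strategic view, which is the conservative choice; a weighted scheme would trade that visibility for smoother aggregates.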

 

The risk monitoring at the three layers can in turn be monitored by the IA (Internal Audit) function, thus paving the way for the first step in the continuous audit process and leading to dynamic risk-based audits.

 

The strategic-layer outcomes and the feedback from the IA function can then be served to the company's various committees and the board in the desired format.
