Continuous Monitoring Rollout - Design

Continuous Monitoring evaluates transactions and master data to detect and report, on a timely basis, variations from the expected results of business controls. The difficulty of automated monitoring of internal controls does not lie in complex software, highly advanced analytic logic, or vast amounts of data; those are frequently quite simple in comparison with advanced predictive analytics or data models. The difficulty lies in making sure that you start in the right place, with the right people, processes, tools, and documents.

Rollout Approach

Continuous Monitoring implementation can be a challenging process, but it can also bring significant added value to the organization. Every rollout can be divided into three phases:

  1. Design – selection of area and analytics, ownership definition, functional requirements, and source data selection.
  2. Development – technical documentation preparation, interface establishment, server-side configuration, as well as system and user acceptance testing.
  3. Implementation – workflow assignment, initial user configuration, move of scripts to production, stabilization and training, and on-going improvement of analytics.

In this article, I would like us to focus on the first phase of the rollout process – Design.

Analytic Selection

Planning is critical to achieving success during a Continuous Monitoring implementation. One of the most important questions to answer is: where should we start? Which processes and controls should be the first candidates for automation with data analytics?

There are several factors that should be taken into consideration when selecting an area for the rollout of CM analytics.

  • Existing mature organization with a sound controls environment – it is much easier to work with areas where the operation and purpose of internal controls are already well known. A Continuous Monitoring rollout is easier when it is the next step on a control maturity ladder.
  • Existing use of data analytics with an opportunity to automate – if there is an area where data analytics is already in use (i.e. manual analysis in tools like ACL or IDEA, or some degree of automation with Excel VBA), it is much easier to define functional requirements for analytics.
  • Stable and unified IT environment for source data – a certain amount of effort goes into implementation, so it is good practice to make sure it will not all go to waste in 3 or 6 months with a change of the source system platform.
  • Centralized processes with wide control coverage – first of all, this is in line with "biggest bang for your buck": if your analytics can cover the expense reports of 50,000 employees globally, they will provide much more value than a review of expense reports for 200 employees from one country. Centralized responsibility also makes it much easier to define analytics and establish responsibility for exception resolution.
  • Large volume of transactional data and complex source structure – if an analytic can be performed in a few minutes in a spreadsheet, its automation might not bring benefits like cost or time savings.
  • Support and improvement of known and recurring compliance issues – audit reports from internal or external auditors are a great source of inspiration for seeing where the issues are. Reading reports from a few periods will give you an idea of which issues are recurring; those are very good candidates to address with Continuous Monitoring, where management can start to receive immediate feedback when something does not meet expectations.

Risk Based Approach

Definition of the test list should be preceded by a proper risk analysis that drives the selection and prioritization of analytics. Each analytic should be assigned to one or more risks it relates to. Both control risks and inherent risks should be taken into consideration.

The following examples of risks should be considered good candidates to be addressed by Continuous Monitoring analytics:

  • Human error in data input to information systems not caught by existing application controls (duplicates, data completeness and accuracy).
  • Potential unauthorized or fraudulent transactions performed by disgruntled employees.
  • Procedure and policy compliance requirements not enforced by application controls.

Not including risk analysis in your decision process may result in designing great, advanced, and complex analytics that do not properly address organizational risks, which can make them insufficiently relevant for compliance or audit purposes. That is not necessarily bad if intended, but you will then not achieve the benefits related to improving the audit or compliance areas.
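
The first risk listed above – human error such as duplicate entries – is a typical target for a simple analytic. A minimal sketch in Python of a duplicate-invoice check; the record fields (`vendor`, `invoice_no`, `amount`, `doc_id`) are illustrative assumptions, not a prescribed layout:

```python
from collections import defaultdict

def find_duplicate_invoices(invoices):
    """Group invoices by (vendor, invoice_no, amount) and report
    every group entered more than once as a potential duplicate."""
    groups = defaultdict(list)
    for inv in invoices:
        key = (inv["vendor"], inv["invoice_no"], inv["amount"])
        groups[key].append(inv["doc_id"])
    # Each exception lists the documents sharing the same key
    return {key: docs for key, docs in groups.items() if len(docs) > 1}

# Example run with hypothetical records
invoices = [
    {"doc_id": "1001", "vendor": "V1", "invoice_no": "INV-7", "amount": 250.0},
    {"doc_id": "1002", "vendor": "V1", "invoice_no": "INV-7", "amount": 250.0},
    {"doc_id": "1003", "vendor": "V2", "invoice_no": "INV-9", "amount": 90.0},
]
print(find_duplicate_invoices(invoices))  # one duplicate group: 1001 and 1002
```

A production analytic would of course use fuzzier keys (e.g. normalized invoice numbers, amount tolerances), but the exception output – a group of documents per suspected duplicate – stays the same.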

Correlation with existing controls

Continuous Monitoring analytics, as automated controls, do not work independently of the rest of the control environment. They should be a firm part of it.

Each selected analytic should be reviewed to determine whether it can be linked to any existing controls (SOX, operational, and financial) or to monitored Key Performance Indicators. Such a review ensures that automated analytics support other controls to their full potential. At the same time, existing controls should not limit the design of new analytics, so that logic and coverage can be improved through the automated approach.

Analytic Ownership

Each analytic should have a clearly defined owner. The owner should be selected in line with existing business and account responsibility. Each owner should have the organizational authority to remediate, document, and effectively influence the proper remediation of identified exceptions, either personally or with the support of their team.

Owner Responsibilities

Every analytic owner should properly understand all of the responsibilities related to this role:

  • Participation in the analytic design process through definition of functional requirements.
  • User Acceptance Testing of new analytics to confirm proper design of the analytic logic, as well as the correctness of source data and exception output.
  • Establishing work instructions that define the process and approach for resolving every type of exception.
  • Ensuring that all exceptions are properly resolved on time and sufficiently documented within the system.
  • Assignment of users and supervision of the exception resolution process.
  • Ensuring continuous improvement of both the analytic and the exception resolution process.

Users' Responsibilities

Users should know which exceptions they are expected to resolve, based on clear communication from the analytic owner and the work instructions.

The analytic owner should keep direct influence over all users in order to retain responsibility for the whole analytic. Geographical dispersion of users and/or a lack of a clear formal business relationship between them and the analytic owner can cause significant problems with timely exception resolution. Such situations should be avoided if possible. Where that is not possible, proper awareness should be built by the analytic owner and supported by senior business executives.

Segregation of Duties

Users assigned to and involved in the exception resolution process should avoid potential segregation of duties conflicts. That means users who might be involved in the creation, update, or approval of the elements subject to the automated control should not be the ones resolving exceptions.

In specific cases (e.g. limited team size) such a situation may be allowed, but the analytic owner should be aware of the related risks and mitigate them where possible.


Functional Requirements

All analytics should have properly defined functional requirements that include the following elements:

  • Objective – What are we trying to achieve?
  • Control Description – What will the control do?
  • Source Data Definition – What source data is needed to run the analytic?
  • Detailed Analytic Description – How will the analytic work, step by step?
  • Analytic Parameters and Scheduling – How frequently will the analytic run, and does it need any predefined variables to support scheduling or future reruns?
  • Results Definition – What should a generated exception look like?

Finalized functional requirements should contain all required information and be sufficient for script development.
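
The elements above can be captured as a structured record, which makes specifications consistent and easy to version. A sketch in Python; the field names and the example content are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class FunctionalRequirement:
    """Illustrative structure mirroring the elements listed above."""
    objective: str                   # What are we trying to achieve?
    control_description: str         # What will the control do?
    source_data: list                # Extracts needed to run the analytic
    analytic_steps: list             # Step-by-step analytic description
    schedule: str                    # How frequently the analytic runs
    parameters: dict = field(default_factory=dict)  # Predefined variables
    result_definition: str = ""      # What a generated exception looks like

fr = FunctionalRequirement(
    objective="Detect duplicate vendor invoices",
    control_description="Flag invoices sharing vendor, number and amount",
    source_data=["AP invoice header extract"],
    analytic_steps=["Load extract", "Group by key", "Report groups > 1"],
    schedule="daily",
    parameters={"lookback_days": 90},
    result_definition="One exception per duplicate group with document IDs",
)
```

Keeping the specification in a machine-readable form also helps later, when revised versions have to be compared against what is actually deployed.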

Based on the finalized functional requirements, the analytic owner should start drafting Work Instructions for exception resolution to confirm the definition of exceptions. This will help later with User Acceptance Testing and serves as additional validation of the analytic logic.

I find it quite beneficial when requirements gathering is done directly by, or together with, the person who will be developing the analytic scripts. Final approval of the specification should be provided by the analytic owner and serves as the starting point for preparation of the data request and for development.

Analytics are not set in stone, and the same applies to their documentation. Any changes to the analytic logic and definition in the next phases (development and implementation) should be properly reflected in a revised functional specification document.

Source data

Based on the data request, the proper source system should be selected. Data should be provided from the systems closest to the point of data entry (transactional systems rather than data marts or warehouses). Other systems may be considered if there is no aggregation of data and source data completeness can be ensured.

There are two major methods for obtaining source data – extracts and direct access to the data:

  • Extracts – files with source data generated automatically by the source system and transferred to the Continuous Monitoring platform. Responsibility for preparation of the extracts, and therefore for the interface, usually stays with the team responsible for the source system. That significantly reduces the risk of getting incorrect or incomplete data due to lack of knowledge. Unfortunately, it also means that any change involving adjustment of the source data takes much longer, requires the involvement of several teams, and sometimes needs additional funding.
  • Direct access to data – the Continuous Monitoring team gets direct access to the data via a dedicated method such as ODBC or Direct Link. Many prefer this solution, as it allows direct control over the source data generation process. The downside is that it requires knowledge of an extract scripting language as well as an understanding of the database table layout and content.
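
Whichever method is used, completeness of the received data should be verified before analytics run on it. A minimal sketch, assuming the source system provides control totals (row count and amount sum) alongside a CSV extract; the file layout and field names are hypothetical:

```python
import csv
import io

def validate_extract(csv_text, expected_rows, expected_total):
    """Reconcile a delimited extract against control totals
    provided by the source system."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    total = sum(float(r["amount"]) for r in rows)
    # Accept only if both the row count and the amount total match
    return len(rows) == expected_rows and abs(total - expected_total) < 0.01

# Hypothetical two-row extract with control totals 2 / 340.50
extract = "doc_id,amount\n1001,250.00\n1002,90.50\n"
print(validate_extract(extract, expected_rows=2, expected_total=340.50))  # True
```

A failed reconciliation should stop the analytic run and raise the issue with the team owning the interface, rather than silently producing exceptions from incomplete data.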

Data extract design

Regardless of the method used to obtain the data, I would recommend preparing a data request or data needs specification. It should be based on the functional requirements and must include all data elements needed, considering the fields required for:

  • Scope inclusions and exclusions
  • Analytic logic performance
  • Exception elements requested
  • Potential future analytics expansion

The document should contain a detailed definition of all data extracts required for the analytics, together with a diagram of the correlation between analytics and data extracts. The description of each extract should include information about:

  • Data extract subject
  • Requested extract scope, frequency and timing
  • Data fields requested, with the possibility for the person designing the extracts to align them directly with source system tables, fields, or extract elements

Historical data

For every analytic, the starting point in time for coverage and the potential coverage of historical data need to be evaluated. If an analytic covers new documents introduced to the system (i.e. sales orders), it should be considered whether already open documents should also be analyzed as a one-time activity during the first run.

If an analytic uses historical data, a decision should be made whether the complete history should be acquired prior to the first run or built gradually during each subsequent run. The analytic owner's capacity to resolve additional historical exceptions should be taken into consideration when making this decision.

Increasing historical coverage can increase the reliance placed on the analytic and its credibility.
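
Building the history gradually can be sketched as keeping a high-water mark and appending only documents posted since the previous run. A minimal illustration; the field names and dates are assumptions:

```python
def incremental_load(history, new_extract, last_run_date):
    """Append only documents posted after the previous run,
    building the historical dataset run by run."""
    fresh = [d for d in new_extract if d["posting_date"] > last_run_date]
    history.extend(fresh)
    # New high-water mark for the next run
    new_mark = max((d["posting_date"] for d in fresh), default=last_run_date)
    return history, new_mark

history = []
extract = [
    {"doc_id": "A1", "posting_date": "2015-01-10"},
    {"doc_id": "A2", "posting_date": "2015-01-15"},
]
history, mark = incremental_load(history, extract, "2015-01-01")
print(len(history), mark)  # 2 2015-01-15
```

Rerunning with the same extract and the new mark adds nothing, so the history grows only with genuinely new documents; the alternative (acquiring the full history up front) trades a larger one-time exception backlog for immediate full coverage.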

Coverage completeness

For every analytic that runs on documents that change frequently, it should be assessed whether the analytic should run only on a complete document extract at a specified time, or should also take into consideration all document changes made between runs.

The application and change controls present in the source system should be taken into consideration when making this decision.

Extending coverage to all source data changes can increase the reliance placed on the analytic and its credibility.
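
The difference between the two options can be sketched as follows: an analytic that checks only the period-end snapshot misses a value that was changed and reverted between runs, while one that also reads a change log catches it. The field names and the limit are illustrative:

```python
def exceptions_from_snapshot(snapshot, limit):
    """Check only end-of-period values."""
    return [d["doc_id"] for d in snapshot if d["amount"] > limit]

def exceptions_from_changes(change_log, limit):
    """Check every intermediate value recorded between runs."""
    return sorted({c["doc_id"] for c in change_log if c["new_amount"] > limit})

# Hypothetical purchase order raised above the limit and reverted
snapshot = [{"doc_id": "PO-1", "amount": 500}]
change_log = [
    {"doc_id": "PO-1", "new_amount": 9000},  # raised above the limit...
    {"doc_id": "PO-1", "new_amount": 500},   # ...reverted before period end
]
print(exceptions_from_snapshot(snapshot, 1000))   # []
print(exceptions_from_changes(change_log, 1000))  # ['PO-1']
```

Whether the extra coverage is worth the added extract complexity depends on how well the source system's own change controls already address this risk.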

Design Phase Significance

Time spent on proper preparation, selection, and high-quality documentation will pay off in the next phases. Decisions made at this stage are critical to making sure you get all the potential gains from implementing Continuous Monitoring in your organization. The documentation prepared will make maintenance of the analytics much easier. The specifications will also help if you plan for your auditors to rely on your analytics and exceptions. I hope this article and my lessons learned will make your journey towards Continuous Monitoring much easier.
