TY - JOUR
TI - Essays toward the development, implementation, testing, and automation of risk-based full population general ledger auditing systems
DO - https://doi.org/doi:10.7282/t3-12c7-s771
PY - 2020
AB - A business’ general ledger (GL) contains a complete picture of all of its transactions and business dealings. For a modern multi-national corporation, this can amount to millions, if not billions, of transactions per day. In such an environment, the GL record system provides valuable insight into the day-to-day operations of a business. In addition to legitimate transactions, the GL is likely to contain evidence of fraudulent or erroneous accounting practices should they exist. The importance of this is not lost on audit regulators. In 2002 the American Institute of Certified Public Accountants (AICPA) released Statement on Auditing Standards (SAS) No. 99, which requires auditors to “…design procedures to test the appropriateness of journal entries recorded in the general ledger and other adjustments” (AICPA 2002). Traditionally, auditors have relied on sampling techniques to test large populations of data (Hall et al. 2000). However, this may not be the most effective approach to detecting low-frequency, high-risk cases of fraud within the population (Neter & Loebbecke 1975). Academics have been quick to call for new analytics-based methods to address this and similar issues related to big data (Applebaum et al. 2017, Vasarhelyi et al. 2015). The following essays aim to close this research gap within the GL space by providing a risk evaluation framework and a full population methodology for examining and ranking individual GL updates based on their riskiness. This collection of essays is designed to provide insight into a new approach for testing large populations of data and to extend it into the GL space.
Academics have moved away from traditional sampling techniques in cases involving big data, instead advocating selective suspicion scoring methodologies (No et al. 2018, Issa 2013, Kim 2011). These methodologies rank records based on a suspicion score derived from applying analytic techniques to the full population. This approach enables an auditor to conduct a test-of-details examination on the same number of records as in a traditional sample. Unlike a traditional sample, however, these records constitute the riskiest elements of the population. While such methodologies have proven successful (Kim & Kogan 2012, No et al. 2018, Lee et al. 2019), this approach has never been extended into the GL space. The GL is particularly important because it includes a comprehensive catalog of virtually all transaction events. To this end, these essays first establish an approach for determining the appropriate tests to apply to each individual GL dataset. This assessment breaks the GL down into risk categories with associated test recommendations. The remaining essays apply this methodology to a variety of audit environments: one essay focuses on an internal audit application at a large multi-national bank, while another works with external auditors to apply the approach to the audit of a multi-national manufacturer. The first original research essay provides a framework, called the General Ledger Adjustment Risk Evaluation (GLARE) framework, for evaluating the risks present in each individual GL. This risk assessment is used to determine the appropriate analytic techniques based on the risks an auditor is targeting. The essay breaks GL risk down into seven key risk categories; each category is discussed in detail, and example tests and analytic procedures are suggested.
The purpose of this essay is to give practitioners and academics insight and guidance into how to evaluate and test different GL risks. It uses a combination of past literature and novel techniques to suggest potential solutions for mitigating different risk patterns present in a company or its GL records. It is crucial to the later implementation of a suspicion-scoring model that an auditor completely understands how to target specific risks; without this understanding, the methodology may be misapplied, and suspicious or risky records may go undetected. In addition to presenting the framework, the essay applies it to a test dataset to generate ten potential risk problems, each rooted in a GLARE risk category and audit assertions. Auditors provide feedback on the perceived importance of these risks to ensure that GLARE identifies risks that matter to auditors. The second original research essay applies GLARE to aid in the construction and application of a full population filtering methodology in an internal audit environment. Manual-entry GL data from a large multi-national bank is used to show that this methodology is effective in detecting accounting irregularities and even an instance of fraud. In this case study, the company uses several internal ledger systems, illustrating that these methodologies are robust to a variety of datasets and structures when it comes to detecting issues. Additionally, each applied test is evaluated based on the results and insights it provided. Having established that GLARE and full population filtering methodologies can be applied successfully to GL update datasets, the third original research essay applies a suspicion ranking methodology to examine the data of a large multi-national manufacturer from an external audit point of view. In this case, GLARE is used to build the filters.
External audit partners were also consulted in applying a formal methodology designed to rank suspicious adjustments to the GL for a final test-of-details sample. The resulting sample was designed to be comparable in size to a traditional audit sample while reflecting the riskiest elements of the GL entries. Additionally, a shorter forward-looking essay is included, designed to position this research so that it is accessible to future academics wishing to apply it in a more automated, continuous context. This essay discusses the adaptations needed for the methodologies discussed elsewhere in this dissertation to fit within the continuous audit paradigm. In addition, an outline for a suspicion scoring dashboard for auditors is developed, illustrated with mockup renderings of how it would look from an auditor’s perspective. Together these essays are designed to move the audit of GL data in a more effective direction. In an era when traditional sampling techniques may not be effective for populations numbering billions of records per year, suspicion scoring may provide a fruitful alternative. The guidance developed here, along with the demonstrated success of these approaches across a variety of GL datasets, will hopefully aid in the generation of new standards and practices in the audit industry.
KW - Audit
KW - Business and Science
LA - English
ER -