Article Type: Research Article
Authors: Ahmed Lotfy AE and Rahman Abdel Fattah HA
Keywords: Ethical fragility; Professional judgment; Algorithm-driven auditing; Behavioral auditing; Cognitive perspective
Purpose:
This study examines how algorithm-driven auditing technologies reshape
auditors’ professional judgment, with particular emphasis on ethical fragility
as an emerging behavioral–cognitive outcome of auditor–algorithm interaction.
Moving beyond descriptive accounts of digital audit tools, the study addresses
a critical gap in the literature by empirically investigating the ethical and
cognitive implications of algorithmic reliance in contemporary audit practice.
Methodology:
The research adopts a quantitative empirical approach grounded in behavioral
and cognitive auditing theory. It analyzes the relationships among algorithmic
reliance, ethical fragility, and the quality of professional judgment in audit
decision-making contexts.
Design and Approach:
A conceptual model is developed linking key characteristics of
algorithm-driven auditing—namely reliance intensity, transparency, and
explainability—to auditors’ professional judgment, with ethical fragility
modeled as a mediating construct. The model is tested using field data
collected from professional auditors, employing structural equation modeling to
assess the proposed hypotheses.
Findings:
The findings indicate that increased reliance on algorithmic tools does not
inherently enhance professional judgment quality. Instead, excessive or opaque
reliance may weaken ethical decision-making when not supported by robust
professional governance mechanisms. Ethical fragility emerges as a critical
mediating factor explaining how algorithm-driven environments influence
auditors’ judgment processes.
Originality and Value:
This study introduces ethical fragility as a novel theoretical
construct in digital auditing research and offers an integrated
behavioral–cognitive explanation of auditor judgment under algorithmic
influence.
Theoretical, Practical, and Social Implications:
Theoretically, the study advances
behavioral auditing literature by integrating ethical fragility into models of
professional judgment. Practically, it provides insights for standard setters
and audit firms regarding ethics codes and quality management systems in
digital audits. Socially, the findings contribute to sustaining public trust in
the auditing profession in algorithm-driven environments.
Background and context
The
auditing profession is undergoing a profound transformation driven by the rapid
integration of algorithm-based technologies, advanced analytics, and artificial
intelligence into audit processes. Algorithm-driven auditing systems are
increasingly employed to automate risk assessment, anomaly detection, and
substantive testing, fundamentally altering how audit evidence is generated,
evaluated, and interpreted [1,2]. This transformation is not merely technical;
rather, it reshapes the cognitive environment in which auditors exercise
professional judgment. Prior auditing research has long emphasized that
professional judgment lies at the core of audit quality, particularly in
contexts characterized by uncertainty, estimation complexity, and managerial
discretion [3,4]. However, algorithm-driven tools introduce new decision
architectures in which auditors increasingly rely on system-generated outputs
rather than solely on personal expertise and professional skepticism [5]. While
such tools promise efficiency gains and enhanced detection capabilities, they
also create novel ethical and behavioral challenges that remain insufficiently
understood.
Recent
studies in behavioral auditing suggest that excessive reliance on automated
systems may lead to cognitive complacency, reduced critical evaluation, and
overconfidence in algorithmic outputs [6,7]. These effects are particularly
pronounced when algorithms operate as “black boxes,” limiting auditors’ ability
to understand, challenge, or override system recommendations [8]. Consequently,
the auditor’s ethical responsibility becomes increasingly blurred, as
accountability for decisions is partially transferred to technological systems
embedded within audit workflows [9]. Within this evolving context, ethical
considerations are no longer confined to traditional issues of independence,
integrity, or objectivity. Instead, they extend to questions of algorithmic
bias, transparency, explainability, and the moral implications of delegating
judgment to intelligent systems [10,11]. Despite growing regulatory attention
to audit technology, professional standards and ethics codes continue to assume
a predominantly human-centered judgment process, offering limited guidance on
ethical conduct in algorithm-mediated decision environments [12].
Research problem statement
Although
the literature on digital and continuous auditing has expanded significantly,
it remains largely focused on technological capabilities and efficiency
outcomes, with comparatively limited attention to the ethical and cognitive
consequences of algorithmic reliance [13]. In particular, existing research has
not sufficiently explained how algorithm-driven auditing reshapes auditors’
ethical judgment processes or alters the quality of professional
decision-making. A critical gap exists in understanding the subtle yet
consequential phenomenon whereby auditors, while formally adhering to
professional standards, may experience a weakening of ethical sensitivity due
to excessive or uncritical reliance on algorithmic systems. This
phenomenon—conceptualized in this study as ethical fragility—reflects a state
in which ethical judgment becomes more susceptible to contextual pressures,
system authority, and cognitive shortcuts embedded in digital audit
environments [14,15]. Moreover, prior studies rarely integrate behavioral and
cognitive perspectives to explain how ethical fragility mediates the
relationship between algorithm-driven auditing and professional judgment
quality. As a result, regulators, standard setters, and audit firms lack
empirically grounded insights into whether algorithmic tools genuinely enhance
ethical decision-making or inadvertently undermine auditors’ moral
responsibility and professional skepticism [16,17]. Accordingly, the central
research problem addressed in this study is the absence of robust empirical
evidence explaining how and under what conditions algorithm-driven auditing
affects auditors’ ethical judgment and professional decision quality through
behavioral–cognitive mechanisms.
Research objectives and research
questions
Building
on the identified research gap, this study seeks to advance understanding of
how algorithm-driven auditing reshapes auditors’ professional judgment through
behavioral and cognitive mechanisms. Specifically, the study aims to move
beyond technology-centric explanations by examining the ethical dimensions of
auditor–algorithm interaction and their implications for judgment quality. The
primary objectives of the study are threefold. First, it aims to empirically
examine the impact of algorithm-driven auditing on auditors’ professional
judgment quality. Second, it seeks to conceptualize and operationalize ethical
fragility as a behavioral–cognitive construct that captures auditors’
susceptibility to ethical weakening in algorithm-mediated decision environments.
Third, the study investigates the mediating role of ethical fragility in
explaining how algorithmic reliance influences professional judgment outcomes.
In line with these objectives, the study formulates research questions corresponding to each of these three aims.
Significance of the study
The
significance of this study is multifaceted. From a theoretical perspective, it
contributes to the behavioral auditing literature by integrating ethical
reasoning into models of professional judgment in digital audit contexts
[18,19]. By introducing ethical fragility as a distinct construct, the study
extends prior work that has primarily examined judgment accuracy and efficiency
while overlooking ethical vulnerability. From a professional and regulatory
perspective, the study addresses growing concerns among standard setters
regarding the governance of audit technologies and the preservation of auditor
accountability in increasingly automated environments [20,21]. As audit firms
rapidly deploy advanced analytics and AI-based tools, understanding their
ethical implications becomes essential for designing effective ethics codes,
quality management systems, and training programs. At a broader societal
level, the study is significant because public trust in the auditing profession
depends not only on technical competence but also on the ethical soundness of
professional judgment. In algorithm-driven environments, failures of ethical
judgment may be less visible yet more systemic, potentially undermining
confidence in audit outcomes and financial reporting credibility [22].
Research contributions
This
study offers several original contributions. First, it introduces ethical
fragility as a novel theoretical construct that captures the ethical
susceptibility of auditors operating in algorithm-driven environments. Second,
it provides an integrated behavioral–cognitive framework explaining how
algorithmic reliance reshapes professional judgment processes. Third, the study
delivers empirical evidence clarifying the conditions under which audit
technologies enhance or impair ethical judgment quality. Finally, it offers
actionable insights for regulators and audit firms seeking to balance
technological innovation with ethical responsibility.
Structure of the paper
The
remainder of the paper is organized as follows. Section 2 reviews the relevant
literature and develops the theoretical foundations of the study. Section 3
presents the proposed conceptual framework and research hypotheses. Section 4
outlines the research methodology and comparative study design. Section 5
reports and analyzes the empirical results. Section 6 discusses the findings,
implications, and recommendations. Section 7 concludes the study and outlines
directions for future research.
Algorithm-driven auditing: concept,
evolution, and implications
Algorithm-driven
auditing represents a structural shift in audit practice whereby
decision-support algorithms, advanced analytics, and artificial intelligence
are embedded directly into audit workflows. Unlike traditional
computer-assisted audit techniques, algorithm-driven systems do not merely
support auditors’ tasks but increasingly shape how audit risks are identified,
prioritized, and evaluated [23,24]. These systems leverage large datasets,
pattern-recognition capabilities, and predictive models to generate audit
insights that often exceed human processing capacity. The evolution of
algorithm-driven auditing can be traced to three overlapping phases. The first
phase emphasized automation and efficiency, focusing on replacing manual
procedures with rule-based systems [25]. The second phase introduced advanced
analytics, enabling auditors to examine full populations rather than samples
and to detect anomalies using statistical and machine-learning techniques. The
third and current phase involves cognitive automation, in which algorithms not
only analyze data but also recommend judgments and courses of action, thereby
influencing auditors’ decision architectures [26].
While
the technical benefits of algorithm-driven auditing are widely acknowledged,
the literature increasingly recognizes that these technologies fundamentally
alter the behavioral context of audit judgment. Algorithms introduce new
sources of authority into the audit process, potentially displacing
professional skepticism with system trust [27]. As a result, auditors may
become less inclined to challenge outputs generated by sophisticated systems,
particularly when those systems are perceived as objective, neutral, or
superior to human judgment [28]. Moreover, algorithm-driven auditing reshapes
accountability structures within audit engagements. Decision outcomes are no
longer attributable solely to individual auditors but emerge from complex
interactions between human judgment and algorithmic recommendations [29]. This
diffusion of responsibility raises ethical concerns regarding who is ultimately
accountable for audit failures, especially when algorithms operate as opaque
“black boxes” with limited explainability [30,31].
Professional judgment in digital
audit environments
Professional
judgment has long been recognized as the cornerstone of audit quality,
particularly in environments characterized by uncertainty, ambiguity, and
managerial discretion [32,33]. Classical audit judgment research conceptualizes
judgment quality as a function of expertise, task complexity, and environmental
constraints [34]. However, digital audit environments introduce new cognitive
and ethical dynamics that challenge these traditional models. From a behavioral
perspective, algorithm-driven tools alter auditors’ information processing by
changing how evidence is presented, aggregated, and prioritized. Rather than
actively constructing judgments from raw evidence, auditors increasingly
evaluate system-generated outputs, which may reduce cognitive effort while
simultaneously increasing reliance on automated cues [35]. Behavioral research
suggests that such shifts can lead to automation bias, whereby individuals
disproportionately favor algorithmic recommendations even when contradictory
evidence is available [36]. In audit contexts, automation bias may manifest as
reduced skepticism, diminished error detection, and premature judgment closure
[37,38]. These effects are exacerbated when auditors face high cognitive load
or time pressure, conditions commonly associated with technologically intensive
audit engagements [39]. Consequently, algorithm-driven environments may
unintentionally weaken the very judgment processes they are designed to
support. Importantly, professional judgment in digital auditing cannot be fully
understood without considering its ethical dimension. Ethical decision-making
models emphasize that judgment quality is shaped not only by technical
competence but also by moral awareness, ethical sensitivity, and contextual
pressures [40]. In algorithm-mediated settings, ethical awareness may be
diminished as auditors perceive decisions to be system-driven rather than
personally constructed, thereby reducing moral engagement with judgment
outcomes [41]. Table 1 presents the evolution of auditing toward
algorithm-driven environments.
Recent
advances in behavioral auditing research emphasize that professional judgment
is not a static capability but an adaptive cognitive process shaped by
environmental cues and decision architectures [42,43]. In algorithm-driven
audit environments, these architectures are increasingly designed by system
developers rather than auditors themselves, subtly guiding attention, framing
alternatives, and influencing evaluative criteria. One critical concern
identified in the literature is the shift from active judgment construction to
judgment validation. Rather than independently assessing evidence, auditors may
focus on validating or rationalizing algorithmic outputs, especially when those
outputs are perceived as technologically sophisticated or statistically
superior. This validation-oriented behavior aligns with motivated reasoning
theory, which suggests that individuals tend to seek confirmatory information
that aligns with salient cues or authoritative sources. Empirical studies
further indicate that auditors’ reliance on algorithmic tools is contingent on
perceived system reliability and institutional endorsement. When audit
technologies are mandated or strongly encouraged by firms, auditors are more
likely to defer judgment authority to systems, even in the presence of
contradictory evidence [44]. Such deference may erode individual accountability
and weaken the internalization of ethical responsibility for audit outcomes.
Cognitive load theory also provides important insights into professional
judgment under algorithmic influence. Algorithm-driven audits often involve
complex interfaces, large data volumes, and continuous monitoring systems, all
of which can increase cognitive burden. Under high cognitive load, auditors may
rely more heavily on heuristic shortcuts and automated recommendations, thereby
increasing susceptibility to judgment biases and ethical oversights [45].
Behavioral and cognitive
foundations of ethical judgment
Understanding
ethical judgment in algorithm-driven auditing requires integrating insights
from behavioral ethics and cognitive psychology, as shown in Table 2.
Behavioral ethics research demonstrates that ethical failures often arise not
from deliberate misconduct but from subtle situational pressures that impair
moral awareness and ethical reasoning [46]. In professional settings,
individuals may unintentionally engage in unethical behavior while perceiving
their actions as compliant with formal rules. Dual-process theories of
cognition provide a useful framework for explaining ethical judgment under
algorithmic conditions. These theories distinguish between intuitive, fast
decision processes (System 1) and deliberative, reflective processes (System 2)
[47]. Algorithm-driven environments tend to amplify System 1 reliance by
presenting pre-processed recommendations that reduce the need for deliberate
reasoning. While such efficiency gains may enhance productivity, they also risk
bypassing reflective ethical evaluation.
Moreover,
ethical decision-making models emphasize the role of moral sensitivity—the
ability to recognize ethical dimensions in decision situations—as a
prerequisite for ethical judgment [48]. In algorithm-mediated audits, moral
sensitivity may be diminished as decisions appear technical rather than
ethical, framed as system outputs rather than personal judgments. This framing
effect can obscure ethical consequences and reduce auditors’ engagement with
moral reasoning. Another relevant stream of research examines conflicts of
interest and professional bias. Even in the absence of explicit incentives,
auditors may experience unconscious biases that align their judgments with
organizational goals or system recommendations [49]. Algorithmic systems, when
embedded within firm-level performance metrics, may implicitly reinforce such
biases by privileging efficiency and consistency over ethical deliberation.
Collectively, these behavioral and cognitive foundations suggest that ethical
judgment in algorithm-driven auditing is highly context-dependent and
vulnerable to subtle influences. Rather than eliminating ethical risk,
algorithmic tools may reconfigure how ethical issues are perceived, evaluated,
and resolved.
Ethics, technology, and algorithmic
reliance
The
growing integration of advanced technologies into professional decision-making
has prompted renewed scholarly attention to the ethical implications of
algorithmic reliance. In auditing, algorithm-driven systems increasingly
mediate how ethical considerations are perceived and enacted, often reshaping
the boundary between technical compliance and moral responsibility [50]. Rather
than eliminating ethical judgment, technology reconfigures its locus, subtly
influencing how auditors recognize, interpret, and resolve ethical dilemmas. A
central concern in this literature is the phenomenon of algorithmic trust. When
algorithms are perceived as objective, consistent, and unbiased, users may
attribute greater legitimacy to system outputs than to their own professional
reasoning. In audit contexts, such trust can displace professional skepticism,
particularly when auditors lack sufficient transparency into algorithmic logic
or data inputs. Ethical judgment thus becomes indirectly shaped by system
design choices, including model assumptions, thresholds, and embedded
priorities. Research on conflicts of interest further suggests that algorithmic
systems may unintentionally reinforce organizational biases. Even when auditors
are formally independent, algorithmic tools developed or selected by audit
firms may reflect implicit preferences for efficiency, client retention, or
risk minimization. These preferences can subtly influence ethical evaluations,
making certain judgments appear technically justified while obscuring their
ethical consequences.
Moreover,
ethical decision-making in technology-mediated environments is strongly
influenced by framing effects. When audit decisions are framed as technical
outputs of sophisticated systems, auditors may perceive ethical issues as
external to their personal responsibility. This moral distancing can weaken
ethical engagement, even in the absence of deliberate misconduct, aligning with
broader findings in behavioral ethics that highlight the unintentional nature
of many ethical failures. The literature therefore converges on the view that
algorithmic reliance introduces a qualitatively different ethical risk profile.
Ethical challenges arise not from overt rule violations but from subtle shifts
in judgment authority, responsibility attribution, and moral awareness. These
insights underscore the need for conceptual frameworks that explicitly
integrate ethical considerations into models of professional judgment in
algorithm-driven auditing.
Synthesis and theoretical
positioning
Synthesizing
the reviewed literature reveals several critical insights that inform the
theoretical positioning of this study. First, algorithm-driven auditing
represents more than a technological enhancement; it constitutes a
transformation of the cognitive and ethical environment in which professional
judgment is exercised. By altering information flows, decision architectures,
and accountability structures, algorithms reshape how auditors engage with
evidence and ethical considerations. Second, professional judgment in digital
audit environments is increasingly influenced by behavioral and cognitive
mechanisms such as automation bias, cognitive load, motivated reasoning, and
framing effects. These mechanisms interact with algorithmic systems in ways
that may weaken ethical sensitivity and reduce reflective judgment,
particularly under conditions of high reliance and limited system transparency.
Third, the ethical dimension of auditor judgment has been under-theorized in
prior research on audit technology. While existing studies acknowledge ethical
risks, they often treat ethics as a peripheral concern rather than as an
integral component of judgment processes [51,52]. The literature lacks a
cohesive construct capable of capturing the subtle ethical vulnerability that
emerges in algorithm-mediated decision contexts. To address this gap, this
study advances the concept of ethical fragility as a behavioral–cognitive
condition reflecting auditors’ increased susceptibility to ethical weakening
under algorithmic influence. Ethical fragility does not imply ethical failure
or intentional misconduct; rather, it captures a state in which ethical
judgment becomes more context-sensitive, more dependent on system cues, and
less anchored in reflective moral reasoning. Positioned at the intersection of
behavioral auditing, cognitive psychology, and professional ethics, ethical
fragility provides a theoretically grounded mechanism through which
algorithm-driven auditing may affect professional judgment quality. By modeling
ethical fragility as a mediating construct, this study integrates fragmented
streams of prior research into a unified explanatory framework. Accordingly,
the theoretical foundation developed in this chapter directly informs the
proposed conceptual framework and hypotheses presented in the next chapter. The
framework builds on established theories of judgment and decision-making while
extending them to account for the ethical complexities introduced by
algorithm-driven auditing environments.
Conceptual foundations of the
proposed framework
The
proposed framework builds on the premise that algorithm-driven auditing
reshapes professional judgment not only through enhanced information processing
but also through behavioral and ethical mechanisms. Drawing on theories of
judgment and decision-making, behavioral auditing, and ethical reasoning, the
framework conceptualizes auditors’ professional judgment as an outcome of
interactions between technological reliance, cognitive processing, and ethical
sensitivity [53,54]. Traditional audit judgment models assume that auditors
actively integrate evidence, professional standards, and ethical principles
when forming judgments. However, algorithm-driven environments alter this
process by embedding decision rules, prioritization logics, and risk signals directly
into audit workflows. As a result, judgment authority becomes partially
transferred from the auditor to the system, changing how auditors perceive
responsibility and control over decision outcomes [55]. Behavioral theories
suggest that such shifts in decision architecture influence auditors’ reliance
patterns and cognitive engagement. According to social cognitive theory,
individuals adapt their behavior based on perceived efficacy and external
guidance, particularly when tasks are complex or ambiguous [56]. In
algorithm-driven auditing, perceived system competence may increase reliance
while simultaneously reducing auditors’ motivation to engage in reflective
judgment processes. Moreover, ethical decision-making theories emphasize that
ethical judgment is highly sensitive to contextual framing and situational cues
[57]. When audit decisions are framed as outputs of sophisticated systems,
ethical considerations may be perceived as secondary to technical compliance,
thereby increasing susceptibility to ethical weakening. Integrating these
perspectives, the proposed framework positions ethical fragility as a central
behavioral–cognitive mechanism through which algorithm-driven auditing affects
professional judgment quality.
Algorithmic Reliance
Algorithmic
reliance refers to the extent to which auditors depend on algorithm-driven
tools when assessing risks, evaluating evidence, and forming audit judgments.
Prior research indicates that reliance increases when systems are perceived as
reliable, authoritative, or institutionally endorsed. In the proposed
framework, algorithmic reliance is conceptualized as a continuous construct
reflecting both frequency of use and decisional dependence, as shown in Table 3.
Ethical Fragility
Ethical
fragility is defined as a behavioral–cognitive state in which auditors’ ethical
judgment becomes more susceptible to contextual pressures, system authority,
and cognitive shortcuts in algorithm-mediated environments. Unlike intentional
ethical violations, ethical fragility captures unintentional ethical weakening
arising from reduced moral awareness, diffusion of responsibility, and
over-reliance on automated cues. This construct extends prior work on ethical
blind spots by situating ethical vulnerability within digital audit contexts.
Professional Judgment Quality
Professional
judgment quality reflects the extent to which auditors’ judgments are
well-reasoned, ethically sound, and consistent with professional standards
under conditions of uncertainty. Consistent with prior auditing research,
judgment quality is viewed as a multidimensional construct encompassing
accuracy, consistency, and ethical appropriateness [58].
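To make the measurement logic concrete, the following minimal sketch shows one conventional way such multi-item constructs could be scored: z-standardize each Likert item and average them into a composite per respondent. The item values and scoring rule are illustrative assumptions, not the study's actual instrument.

```python
import numpy as np

def composite_score(items: np.ndarray) -> np.ndarray:
    """Average z-standardized Likert items (rows = respondents,
    columns = items) into one composite score per respondent.
    Illustrative scoring only, not the study's instrument."""
    z = (items - items.mean(axis=0)) / items.std(axis=0)
    return z.mean(axis=1)

# Hypothetical 5-point Likert responses: 4 respondents x 3 reliance items.
reliance_items = np.array([[5, 4, 5],
                           [2, 2, 3],
                           [4, 5, 4],
                           [1, 2, 1]])
algorithmic_reliance = composite_score(reliance_items)

# Higher composite -> greater decisional dependence on algorithmic tools;
# scores are centered near zero by construction.
print(algorithmic_reliance)
```

The same scoring would apply analogously to the ethical fragility and judgment quality items before fitting the structural model.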
Development of direct hypotheses
Algorithmic reliance and professional judgment quality
The
literature presents mixed evidence regarding the direct effect of algorithmic
reliance on professional judgment. On one hand, algorithm-driven tools can
enhance judgment quality by improving information completeness, consistency,
and analytical depth. On the other hand, excessive reliance may lead to
automation bias, reduced skepticism, and diminished critical evaluation of
evidence. From a behavioral perspective, auditors may defer judgment authority
to algorithms when systems are perceived as superior decision-makers, thereby
weakening active engagement in judgment formation. Accordingly, the net effect
of algorithmic reliance on judgment quality is theoretically ambiguous and
contingent on intervening mechanisms. Nevertheless, absent ethical and cognitive
safeguards, higher levels of algorithmic reliance are expected to negatively
affect professional judgment quality by reducing auditors’ critical evaluation
and ethical engagement. This leads to the following hypothesis:
H1:
Algorithmic reliance is negatively associated with auditors’ professional
judgment quality.
Behavioral
and cognitive theories suggest that increased reliance on authoritative
decision aids may unintentionally weaken individuals’ ethical sensitivity. In
algorithm-driven auditing, auditors may perceive system-generated outputs as
objective and normatively correct, thereby reducing their inclination to
critically reflect on ethical implications [59]. Such reliance can shift
responsibility attribution away from the individual auditor toward the
technological system, fostering moral distancing and reduced ethical
engagement. Research in behavioral ethics further indicates that ethical
weakening often arises from situational factors rather than intentional
misconduct [60]. When auditors operate within highly automated environments,
ethical issues may be reframed as technical problems, diminishing moral
awareness and increasing susceptibility to ethical blind spots. Accordingly,
higher levels of algorithmic reliance are expected to increase auditors’
ethical fragility.
H2:
Algorithmic reliance is positively associated with ethical fragility.
Ethical fragility and professional
judgment quality
Ethical
fragility is expected to have a direct adverse effect on professional judgment
quality. Auditors experiencing reduced moral awareness and heightened
dependence on system cues may be less likely to engage in reflective reasoning,
challenge questionable outputs, or fully consider ethical consequences [61].
Prior research demonstrates that diminished ethical sensitivity can impair
judgment accuracy and consistency, particularly in complex decision
environments. From a cognitive standpoint, ethical fragility aligns with
increased reliance on heuristic processing, which may be efficient but less
robust in ethically ambiguous situations. Consequently, auditors with higher
levels of ethical fragility are expected to exhibit lower-quality professional
judgments.
H3:
Ethical fragility is negatively associated with auditors’ professional judgment
quality.
Mediating role of ethical fragility
Building
on mediation theory, ethical fragility is positioned as a key mechanism through
which algorithmic reliance affects professional judgment quality. Classical
mediation models emphasize that the effect of an independent variable on an
outcome may operate indirectly through an intervening construct that captures
the underlying behavioral process [62]. In the context of algorithm-driven
auditing, ethical fragility represents such an intervening process by
translating technological reliance into ethical and cognitive consequences.
Technology acceptance models suggest that reliance on systems is influenced by
perceived usefulness and ease of use, which can increase dependence on
automated outputs [63,64]. While such dependence may enhance efficiency, it may
also weaken auditors’ ethical engagement when system outputs are treated as
default decisions rather than inputs for critical evaluation. Ethical fragility
thus provides a theoretically grounded explanation for why algorithmic reliance
does not uniformly enhance judgment quality. Empirical studies in auditing and
organizational behavior support the plausibility of this mediation. Prior
research has shown that conflicts of interest, authority cues, and performance
pressures can indirectly impair judgment quality through ethical and cognitive
mechanisms [65]. Extending this logic, the proposed framework hypothesizes that
ethical fragility mediates the relationship between algorithmic reliance and
professional judgment quality [66].
H4: Ethical fragility mediates the relationship between algorithmic reliance and auditors’ professional judgment quality.
Summary of the conceptual model and hypotheses
The proposed framework integrates insights from behavioral auditing, cognitive
psychology, and ethical decision-making to explain how algorithm-driven
auditing reshapes professional judgment, as shown in Table 4. Algorithmic
reliance is conceptualized as a primary antecedent that influences both ethical
fragility and professional judgment quality. Ethical fragility, in turn, serves
as a central mediating mechanism linking technological reliance to judgment
outcomes. This integrated framework responds directly to gaps identified in
prior literature by explicitly modeling the ethical dimension of
auditor–algorithm interaction. Rather than assuming that advanced technologies
inherently improve judgment quality, the framework highlights conditions under
which algorithmic reliance may inadvertently undermine ethical engagement and
professional responsibility.
Research design and methodological approach
This study adopts a quantitative, empirical research design grounded in behavioral
auditing and ethical decision-making literature. The chosen design is
appropriate for testing the causal relationships proposed in the conceptual
framework and for examining the mediating role of ethical fragility in
algorithm-driven auditing contexts. Quantitative approaches are particularly
suitable for theory testing and hypothesis validation where latent constructs
and complex interrelationships are involved [67]. Given the study’s focus on
professional judgment, ethical vulnerability, and technology reliance, the
research design integrates elements of behavioral research with structural
modeling techniques. This integration enables the simultaneous examination of
measurement validity and structural relationships among constructs, which is
essential when studying psychological and ethical variables that cannot be
observed directly [68]. To analyze the proposed relationships, the study
employs partial least squares structural equation modeling (PLS-SEM). PLS-SEM
is well suited for exploratory and theory-extension research, particularly when
models include mediating variables and when the primary objective is prediction
rather than strict model fit [69]. Moreover, PLS-SEM is robust to non-normal
data distributions and is appropriate for studies involving professional
respondents where sample sizes may be constrained [70]. In addition to the main
empirical analysis, the research incorporates a comparative design to examine
whether the proposed relationships differ across distinct professional
contexts. Comparative analysis enhances the external validity of the findings
by allowing the assessment of contextual effects, such as differences in
organizational environments or levels of technological maturity [71].
Population, sample, and data collection
The target population of the study consists of professional auditors involved in
external audit engagements where algorithm-driven tools and advanced analytics
are used as part of the audit process. This population is particularly relevant
given the study’s focus on professional judgment under technologically mediated
conditions. A purposive sampling strategy was employed to ensure that
respondents possess sufficient experience with algorithm-driven auditing
systems. Prior methodological research suggests that purposive sampling is
appropriate when the research objective requires participants with specific
professional exposure and expertise [72]. Eligible respondents were required to
meet two criteria: (1) active involvement in audit engagements using digital or
algorithmic tools, and (2) a minimum level of professional experience
sufficient to exercise independent judgment. Data were collected using a
structured questionnaire distributed electronically to practicing auditors.
Electronic data collection was chosen to facilitate access to geographically
dispersed respondents and to enhance response efficiency [73]. To mitigate
potential non-response bias, follow-up reminders were issued, and participation
was voluntary and anonymous. The final sample size was assessed against
methodological guidelines for structural equation modeling. Prior research
indicates that PLS-SEM requires a minimum sample size of at least ten times the
maximum number of structural paths pointing to any latent construct (the
widely cited “10-times rule”).
sample size exceeded this minimum threshold, supporting the adequacy of the
data for subsequent analysis.
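The adequacy check described above can be illustrated with a short sketch. The snippet below applies the common “10-times rule” heuristic for PLS-SEM sample sizes; the function name is ours, the path counts follow the study’s structural model (reliance → fragility; reliance and fragility → judgment quality), and a formal power analysis remains the more rigorous alternative.

```python
# "10-times rule" heuristic: minimum N is ten times the largest number
# of structural paths pointing at any single latent construct.

def min_sample_10x(inbound_paths: dict[str, int], multiplier: int = 10) -> int:
    """inbound_paths maps each endogenous construct to the number of
    structural paths pointing at it."""
    return multiplier * max(inbound_paths.values())

# Structural model of this study (illustrative labels):
# reliance -> fragility; reliance and fragility -> judgment quality.
paths = {"ethical_fragility": 1, "judgment_quality": 2}
print(min_sample_10x(paths))  # -> 20
```

Any achieved sample above this figure would satisfy the heuristic, though larger samples are needed for small effects.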
Measurement of variables
All latent constructs in the study were measured using multi-item scales adapted from prior validated research and modified to reflect the context of
algorithm-driven auditing. The use of established scales enhances construct
validity and facilitates comparability with prior studies [74].
Algorithmic Reliance
Algorithmic reliance was measured using a multi-item scale capturing the extent to which
auditors depend on algorithm-driven tools when assessing audit risks,
evaluating evidence, and forming professional judgments. Scale items reflect
both frequency of use and decisional dependence on system outputs. Prior
research emphasizes that reliance is not merely technological usage but a
behavioral orientation toward system authority [75,76]. Respondents were asked
to indicate their level of agreement with statements describing reliance on
algorithmic recommendations using a Likert-type scale. Higher scores indicate
greater reliance on algorithm-driven auditing systems.
Ethical Fragility
Ethical fragility was operationalized as a latent construct reflecting reduced ethical
sensitivity, moral disengagement, and diffusion of responsibility in
algorithm-mediated decision environments. Given the novelty of the construct,
scale development followed established guidelines for construct specification
and content validity [77]. Items were adapted from behavioral ethics literature
and contextualized to audit decision-making scenarios involving algorithmic
tools. This approach aligns with recommendations for measuring ethically
sensitive constructs in professional contexts [78]. Table 5 presents measurement constructs and scale sources.
Professional Judgment Quality
Professional judgment quality was
measured as a multidimensional construct reflecting the soundness, consistency,
and ethical appropriateness of auditors’ decisions under conditions of
uncertainty. Measurement items were adapted from established audit judgment
research and contextualized to algorithm-driven audit scenarios to ensure
relevance [79]. The scale captures auditors’ ability to critically evaluate
evidence, appropriately override algorithmic recommendations when necessary,
and maintain professional skepticism. Consistent with best practices in
measurement development, all scale items were pre-tested with a small group of
experienced auditors to ensure clarity and contextual fit. Responses were
recorded using a five-point Likert scale, with higher values indicating higher
judgment quality.
Data analysis techniques and validity assessment
Data analysis was conducted using PLS-SEM, following a two-stage approach that
distinguishes between the assessment of the measurement model and the
evaluation of the structural model. This approach allows for rigorous testing
of construct reliability and validity prior to hypothesis testing.
Measurement Model Assessment
Internal consistency reliability was evaluated using Cronbach’s alpha and composite
reliability, with values exceeding recommended thresholds indicating
satisfactory reliability. Convergent validity was assessed through average
variance extracted (AVE), ensuring that each construct explains a sufficient
proportion of variance in its indicators [80]. Discriminant validity was
examined using both the Fornell–Larcker criterion and the heterotrait–monotrait
(HTMT) ratio. The HTMT approach is particularly robust in detecting
discriminant validity issues in structural models with conceptually related
constructs [81]. To address potential common method bias, procedural remedies
were implemented at the design stage, including respondent anonymity and scale
separation. Statistical tests were also conducted to assess the extent of
method variance.
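For concreteness, the three reliability and convergent-validity statistics named above have simple closed forms. The sketch below is a minimal Python/NumPy illustration, not the study’s actual analysis pipeline (which used PLS-SEM software); the function names are ours.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: n_respondents x k matrix of item scores for one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR from standardized outer loadings of one construct."""
    num = loadings.sum() ** 2
    return num / (num + (1 - loadings ** 2).sum())

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean squared standardized loading."""
    return (loadings ** 2).mean()

# Illustrative construct with three equally strong indicators:
loadings = np.array([0.8, 0.8, 0.8])
print(round(ave(loadings), 2))                    # -> 0.64 (> 0.50)
print(round(composite_reliability(loadings), 2))  # -> 0.84 (> 0.70)
```

Values above the 0.70 (alpha, CR) and 0.50 (AVE) thresholds cited in the text would indicate satisfactory reliability and convergent validity.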
Structural Model Evaluation
The structural model was evaluated by examining path coefficients, significance levels obtained through bootstrapping, and explained variance (R²) of endogenous constructs. Effect sizes (f²) were calculated to assess the substantive impact of exogenous variables on endogenous outcomes [82]. Table 6 presents analytical procedures and validation criteria.
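The f² effect size referenced above has a simple closed form: the change in R² when an exogenous construct is omitted, scaled by the unexplained variance of the full model. A hedged sketch (function name ours; the benchmark values are Cohen’s conventional guidelines, not thresholds stated by this study):

```python
def f_squared(r2_included: float, r2_excluded: float) -> float:
    """Cohen's f2 for one exogenous construct:
    (R2_included - R2_excluded) / (1 - R2_included)."""
    return (r2_included - r2_excluded) / (1 - r2_included)

# Conventional benchmarks: ~0.02 small, ~0.15 medium, ~0.35 large.
print(f_squared(0.50, 0.40))  # ≈ 0.20, a medium effect
```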
Comparative study design
To enhance the robustness and generalizability of the findings, the study
incorporates a comparative analysis across different professional contexts.
Comparative designs are particularly valuable in auditing research, where
institutional environments, organizational cultures, and levels of
technological maturity may influence judgment processes [83]. The comparative
analysis examines whether the relationships among algorithmic reliance, ethical
fragility, and professional judgment quality differ across subgroups defined by
organizational or contextual characteristics. Such comparisons allow for the
identification of boundary conditions under which algorithm-driven auditing
may have stronger or weaker ethical effects. Measurement invariance across
groups was assessed prior to conducting group comparisons to ensure that
constructs were interpreted consistently across contexts. Differences in path
coefficients were then evaluated using multi-group analysis techniques within
the PLS-SEM framework. This comparative approach strengthens the study’s
contribution by demonstrating that the proposed framework is not
context-specific but applicable across diverse audit environments, thereby
enhancing both internal and external validity.
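The multi-group comparison logic can be sketched as a permutation test on the between-group difference in a path coefficient, in the spirit of PLS multi-group analysis. This is an illustrative simplification using a single bivariate path rather than a full PLS model; all names and data are hypothetical.

```python
import numpy as np

def path_coef(x, y):
    """Simple slope as a stand-in for a structural path coefficient."""
    return np.polyfit(x, y, 1)[0]

def mga_permutation(x_a, y_a, x_b, y_b, n_perm=500, seed=42):
    """Permutation p-value for |path(group A) - path(group B)|:
    group labels are shuffled and the difference recomputed."""
    rng = np.random.default_rng(seed)
    observed = abs(path_coef(x_a, y_a) - path_coef(x_b, y_b))
    x = np.concatenate([x_a, x_b])
    y = np.concatenate([y_a, y_b])
    n_a = len(x_a)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(x))
        d = abs(path_coef(x[idx[:n_a]], y[idx[:n_a]])
                - path_coef(x[idx[n_a:]], y[idx[n_a:]]))
        if d >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)
```

A small p-value indicates that the path differs across contexts; measurement invariance should be established before interpreting such differences, as the text notes.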
Descriptive statistics and preliminary diagnostics
The empirical analysis begins with an examination of descriptive statistics and
preliminary diagnostics to assess data suitability for multivariate analysis.
Prior to hypothesis testing, the data were screened for missing values,
outliers, and distributional properties. Missing data were minimal and handled
using established procedures appropriate for structural equation modeling,
ensuring that parameter estimates were not biased [84,85]. Descriptive
statistics indicate sufficient variability across all key constructs,
suggesting that respondents meaningfully differentiated among levels of
algorithmic reliance, ethical fragility, and professional judgment quality.
Skewness and kurtosis values fell within acceptable ranges for PLS-SEM
applications, supporting the robustness of subsequent analyses [86]. Potential
common method bias was assessed given the self-reported nature of the data.
Consistent with best practices, both procedural and statistical remedies were
applied. Procedurally, anonymity and scale separation were employed.
Statistically, variance inflation factors and correlation diagnostics did not
indicate severe method bias concerns [87]. These results suggest that common
method variance is unlikely to materially distort the structural relationships.
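The distributional screening mentioned above can be sketched as follows. The |skewness| < 2 and |excess kurtosis| < 7 cut-offs used here are common rules of thumb for SEM applications, not thresholds stated by this study; function names are ours.

```python
import numpy as np

def skew_kurtosis(x: np.ndarray) -> tuple[float, float]:
    """Sample skewness and excess kurtosis (simple moment estimators)."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    m2 = (d ** 2).mean()
    skew = (d ** 3).mean() / m2 ** 1.5
    excess_kurt = (d ** 4).mean() / m2 ** 2 - 3.0
    return skew, excess_kurt

def within_limits(x, max_skew=2.0, max_kurt=7.0) -> bool:
    """Flag indicators whose distribution may strain even PLS-SEM's
    tolerance for non-normality."""
    s, k = skew_kurtosis(x)
    return abs(s) <= max_skew and abs(k) <= max_kurt
```

Indicators failing the screen would warrant transformation or closer inspection before model estimation.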
Measurement model results
The measurement model was evaluated prior to assessing the structural model,
following the recommended two-step analytical approach. Internal consistency
reliability was assessed using Cronbach’s alpha and composite reliability (CR).
All constructs exceeded the recommended threshold of 0.70, indicating
satisfactory reliability. Convergent validity was examined through average
variance extracted (AVE). All constructs demonstrated AVE values above 0.50,
confirming that indicators adequately captured their intended latent
constructs. These results support the adequacy of the measurement model in
capturing algorithmic reliance, ethical fragility, and professional judgment
quality. Discriminant validity was assessed using both the Fornell–Larcker
criterion and the heterotrait–monotrait (HTMT) ratio. HTMT values were below the
conservative threshold of 0.85, indicating clear empirical distinction among
the constructs [88]. Collectively, these findings confirm that the measurement
model exhibits acceptable reliability and validity, permitting meaningful
interpretation of the structural relationships.
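The HTMT ratio used above can be computed directly from item correlations: the mean heterotrait-heteromethod correlation divided by the geometric mean of each construct’s average monotrait-heteromethod correlation. A minimal sketch (illustrative function name, not the software routine used in the study):

```python
import numpy as np

def htmt(items_a: np.ndarray, items_b: np.ndarray) -> float:
    """HTMT ratio between two constructs.
    items_*: n_respondents x k matrices of item scores."""
    r = np.corrcoef(np.hstack([items_a, items_b]), rowvar=False)
    ka = items_a.shape[1]
    kb = items_b.shape[1]
    # Mean correlation between items of different constructs:
    hetero = np.abs(r[:ka, ka:]).mean()
    # Mean correlation among items of the same construct (off-diagonal):
    mono_a = np.abs(r[:ka, :ka][np.triu_indices(ka, 1)]).mean()
    mono_b = np.abs(r[ka:, ka:][np.triu_indices(kb, 1)]).mean()
    return hetero / np.sqrt(mono_a * mono_b)
```

Values below 0.85 (the conservative threshold cited in the text) indicate that the two constructs are empirically distinct.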
Structural model results: direct effects
Following confirmation of measurement model adequacy, the structural model was evaluated to test the direct hypotheses. Path coefficients were estimated using bootstrapping procedures with a large number of resamples to obtain robust significance levels [89,90].
The direct effect of algorithmic reliance on professional judgment quality (H1) was negative and statistically significant. This finding suggests that higher reliance on algorithm-driven auditing tools is associated with lower judgment quality when ethical and cognitive safeguards are not explicitly embedded in audit processes. The result aligns with prior evidence indicating that automation bias and over-reliance on decision aids may impair professional judgment [91,92].
The direct effect of algorithmic reliance on ethical fragility (H2) was positive and significant, providing empirical support for the argument that increased dependence on algorithmic systems heightens auditors’ susceptibility to ethical weakening. This result is consistent with behavioral ethics research emphasizing that authority cues and system trust can diminish moral awareness and personal accountability.
The direct relationship between ethical fragility and professional judgment quality (H3) was negative and statistically significant. Auditors exhibiting higher levels of ethical fragility demonstrated lower judgment quality, confirming that ethical vulnerability constitutes a critical risk factor in algorithm-mediated decision environments. This finding supports theoretical assertions that ethical sensitivity is integral to sound professional judgment [93].
Effect size analyses (f²) indicate that ethical fragility exerts a substantively meaningful impact on judgment quality, beyond mere statistical significance. The explained variance (R²) values suggest that the model accounts for a substantial proportion of variance in professional judgment quality, supporting the model’s explanatory power [94,95]. Table 7 summarizes the measurement and structural model results.
Mediation analysis: the role of ethical fragility
To test the mediating role of ethical fragility (H4), mediation analysis was conducted using bootstrapping procedures within the PLS-SEM framework.
Bootstrapping provides a robust, non-parametric method for assessing indirect
effects and is recommended over traditional causal-steps approaches,
particularly in complex models. The indirect effect of algorithmic reliance on
professional judgment quality through ethical fragility was negative and
statistically significant, while the direct effect remained
significant but attenuated when the mediator was included. This pattern
indicates partial mediation, suggesting that ethical fragility explains a
substantial portion of the adverse impact of algorithmic reliance on judgment
quality but does not fully account for it [96,97]. The significance of the
indirect effect was further confirmed using bias-corrected confidence
intervals, which did not include zero. These results provide strong empirical
support for the proposed behavioral–ethical mechanism through which
algorithm-driven auditing affects professional judgment. Consistent with
mediation theory, ethical fragility functions as an intervening process
translating technological reliance into ethical and cognitive consequences
[98,99]. From a substantive perspective, the mediation findings suggest that
algorithmic tools do not impair judgment quality solely because of their
technical characteristics. Rather, the impairment arises when reliance on such
tools weakens auditors’ ethical engagement and moral awareness. This insight
aligns with recent calls to move beyond purely technical evaluations of audit
technologies and to explicitly consider their behavioral and ethical effects
[100,101].
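The bootstrapped indirect effect reported above can be approximated with a simple percentile bootstrap of the a×b product. The sketch below uses ordinary least-squares regression in place of PLS path estimation and entirely synthetic variable names; it illustrates the mechanism only, not the study’s actual estimates.

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b: a from regressing M on X; b is M's coefficient when
    regressing Y on both M and X (direct effect controlled)."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), m, x])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return a * coef[1]

def bootstrap_indirect(x, m, y, n_boot=1000, alpha=0.05, seed=0):
    """Point estimate and percentile bootstrap CI for a*b."""
    rng = np.random.default_rng(seed)
    n = len(x)
    draws = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample respondents with replacement
        draws[i] = indirect_effect(x[idx], m[idx], y[idx])
    lo, hi = np.percentile(draws, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return indirect_effect(x, m, y), lo, hi
```

A confidence interval excluding zero, as reported in the text, supports a significant indirect effect; a direct effect that remains significant alongside it indicates partial mediation.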
Predictive power and model robustness
Beyond hypothesis testing, the model’s predictive power was assessed using out-of-sample prediction metrics. Predictive relevance (Q²) values for
endogenous constructs were positive, indicating that the model exhibits
meaningful predictive capability. Additionally, effect size estimates and
cross-validated prediction errors support the robustness of the structural
relationships. Robustness checks were conducted to ensure that the findings
were not sensitive to alternative model specifications or estimation
procedures. Results remained stable across different bootstrapping settings
and when controlling for potential confounding variables, consistent with
methodological recommendations for empirical auditing research.
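Q² compares out-of-sample prediction errors against a naive mean benchmark: Q² = 1 − SSE/SSO, with positive values indicating predictive relevance. The sketch below uses k-fold cross-validation with a simple linear predictor rather than the blindfolding procedure implemented in PLS-SEM software; names are illustrative.

```python
import numpy as np

def q_squared(x, y, k=5):
    """Cross-validated predictive relevance: Q2 = 1 - SSE/SSO,
    where SSO is the error of predicting the training mean."""
    n = len(x)
    idx = np.arange(n)
    sse = sso = 0.0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        slope, intercept = np.polyfit(x[train], y[train], 1)
        pred = slope * x[fold] + intercept
        sse += np.sum((y[fold] - pred) ** 2)
        sso += np.sum((y[fold] - y[train].mean()) ** 2)
    return 1 - sse / sso
```

A Q² above zero, as reported for the endogenous constructs, means the model predicts held-out observations better than their mean.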
The comparative analysis examined whether the structural relationships differed
across professional subgroups characterized by varying levels of exposure to
algorithm-driven auditing. Multi-group analysis revealed that the negative
effect of algorithmic reliance on professional judgment quality was
significantly stronger in contexts characterized by higher automation
intensity. Similarly, the mediating effect of ethical fragility was more
pronounced in these environments. These findings suggest that the ethical risks
associated with algorithmic reliance are not uniform across contexts. Instead,
they are contingent on the degree to which auditing tasks are automated and on
the extent to which professional judgment is displaced by system-generated
recommendations. This pattern is consistent with prior research emphasizing
contextual heterogeneity in technology-enabled decision-making [102]. Table 8 summarizes mediation, predictive power, and comparative results.
Summary of empirical findings
Collectively, the empirical results provide consistent support for the proposed framework.
Algorithmic reliance is shown to adversely affect professional judgment quality
both directly and indirectly through ethical fragility. Ethical fragility
emerges as a critical behavioral–ethical mechanism that explains why
technologically advanced audit environments may inadvertently undermine
judgment quality. The findings also demonstrate that these effects are
context-dependent, with stronger adverse consequences observed in highly
automated audit settings. By integrating mediation analysis, predictive
assessment, and comparative evaluation, this chapter provides a comprehensive
empirical foundation for the subsequent discussion of theoretical, practical,
and regulatory implications.
Discussion of the findings in relation to prior literature
The empirical results presented in Chapter 5 provide strong and internally consistent evidence that algorithm-driven auditing reshapes auditors’
consistent evidence that algorithm-driven auditing reshapes auditors’
professional judgment through mechanisms that are simultaneously cognitive,
ethical, and institutional in nature. Consistent with foundational auditing
literature, the findings confirm that the introduction of advanced technologies
does not automatically enhance audit quality and may, under certain conditions,
impair professional judgment. The statistically significant negative
relationship between algorithmic reliance and professional judgment quality
aligns with earlier research on automation bias and excessive reliance on
decision aids in auditing and other professional domains. Prior studies have
shown that when decision aids are perceived as authoritative or objectively
superior, professionals tend to reduce critical evaluation and professional
skepticism. The present findings reinforce this argument but extend it by
demonstrating that judgment impairment persists even after controlling for task
complexity and informational richness. More importantly, this study advances
the literature by identifying ethical fragility as a central explanatory
mechanism. While earlier research has largely attributed judgment deterioration
to cognitive overload or reduced effort [103,104], the present findings reveal
that ethical weakening plays an equally—if not more—important role. This
insight is consistent with behavioral ethics research showing that ethical
failures in professional contexts are often unintentional and arise from
situational structures rather than deliberate misconduct [105].
The positive association between algorithmic reliance and ethical fragility
directly supports arguments advanced in the ethics and technology literature.
Scholars have repeatedly warned that algorithmic systems may reframe moral
decisions as technical problems, thereby reducing moral awareness and diffusing responsibility [106,107]. In audit contexts, where professional judgment carries public-interest implications, such reframing is particularly
problematic. The findings empirically validate these concerns by demonstrating
that greater dependence on algorithmic tools is associated with higher levels
of ethical vulnerability among auditors. The mediating role of ethical
fragility further differentiates this study from prior audit analytics
research. While regulatory and professional bodies acknowledge ethical risks
associated with technology, these risks are often discussed normatively rather
than modeled empirically [108]. By positioning ethical fragility as an
endogenous mediator, this study provides a behaviorally grounded explanation of
why technologically advanced audits may still fail to deliver high-quality
professional judgment.
Discussion of the findings in relation to theory
From an institutional theory perspective, the findings can be interpreted as an
unintended consequence of technology-driven institutional conformity. Audit
firms operate under strong coercive and mimetic pressures to adopt advanced
analytics and algorithmic systems in order to signal competence, efficiency,
and regulatory compliance [109,110]. While such adoption enhances procedural
legitimacy, it may simultaneously weaken substantive ethical engagement,
resulting in a decoupling between formal audit processes and professional
values. Professionalism theory further illuminates the observed dynamics.
Classical conceptions of professionalism emphasize discretionary judgment,
ethical responsibility, and moral autonomy as defining features of professional
work [111,112]. The findings suggest that algorithm-driven auditing subtly
erodes these features by redistributing judgment authority from auditors to
technological systems. This redistribution does not eliminate professional
responsibility formally, but it weakens ethical ownership of decisions in
practice [113,114]. Behavioral decision theory provides additional explanatory
depth. Dual-process models of cognition posit that ethical judgment requires
deliberate, reflective processing, which is easily bypassed in environments
characterized by pre-structured choices and system-generated recommendations.
The empirical evidence indicates that algorithmic reliance shifts auditors
toward heuristic, system-dependent processing, thereby increasing ethical
fragility and reducing judgment robustness. Finally, legitimacy theory offers a
broader societal interpretation. Auditing derives legitimacy not merely from
technical accuracy but from the perception of independent, ethically grounded
professional judgment [115,116]. The findings suggest a growing tension between
technological legitimacy and ethical legitimacy: while algorithmic auditing may
enhance the former, it risks undermining the latter if ethical fragility is
left unaddressed. Table 9 presents empirical findings and theoretical interpretation.
Discussion of hypotheses validity
The empirical findings provide strong and coherent support for all hypotheses
developed in Chapter 3. Support for H1 confirms that algorithmic reliance
exerts a statistically significant negative effect on professional judgment
quality. This result reinforces prior evidence that advanced audit technologies
may weaken professional skepticism when used as judgment substitutes rather
than decision aids.
Support for H2 demonstrates that algorithmic reliance significantly increases ethical
fragility. This finding empirically validates long-standing theoretical
arguments regarding ethical fading and moral distancing in structured decision
environments. Importantly, it shows that ethical vulnerability is not an
individual trait but a situational outcome shaped by technological design and
organizational context.
Support for H3 confirms that ethical fragility undermines professional judgment
quality. This result directly challenges any separation between technical
competence and ethical competence, demonstrating that ethical sensitivity is a
constitutive element of professional expertise [117].
Finally, support for H4 establishes ethical fragility as a partial mediator between
algorithmic reliance and judgment quality, consistent with mediation theory.
The partial mediation indicates that while ethical fragility is a dominant
mechanism, additional cognitive and organizational factors may also contribute
to judgment outcomes.
Discussion of comparative results
The comparative analysis conducted in Chapter 5 reveals that the strength and
nature of the relationships identified in the structural model are not uniform
across audit contexts. Specifically, the negative impact of algorithmic
reliance on professional judgment quality, as well as the mediating role of
ethical fragility, are significantly stronger in highly automated audit
environments. This finding provides important boundary conditions for
interpreting the main results and reinforces the argument that ethical risks
associated with algorithm-driven auditing are context-dependent rather than
universal. From an institutional perspective, this pattern is consistent with
research emphasizing that structural intensity amplifies behavioral
consequences [118,119]. In highly automated environments, algorithmic systems
are deeply embedded in audit workflows, reducing opportunities for
discretionary judgment and increasing auditors’ dependence on system outputs.
As a result, ethical fragility becomes more pronounced due to heightened
authority cues, reduced moral agency, and diffusion of responsibility [120].
Conversely, in less automated audit settings, algorithmic tools function more clearly as
decision aids rather than decision substitutes. Auditors retain greater
interpretive flexibility and are more likely to engage in reflective ethical
reasoning, thereby mitigating the adverse effects of algorithmic reliance. This
distinction aligns with prior research suggesting that technology does not
determine outcomes in isolation but interacts with organizational design,
professional norms, and governance structures [121,122]. These comparative
findings underscore that ethical fragility is not an inevitable consequence of
technological adoption. Instead, it emerges when algorithmic systems are
deployed in ways that displace professional judgment rather than support it.
This insight has direct implications for audit firm governance and regulatory
oversight, as it highlights the importance of contextual safeguards in managing
ethical risks.
Implications
Theoretical implications
This study makes several substantive theoretical contributions. First, it advances
auditing research by explicitly integrating ethics into models of
algorithm-driven professional judgment. Prior literature has largely treated
ethics as a background condition or normative concern. By contrast, this study
conceptualizes ethical fragility as a measurable, behaviorally grounded
construct that operates as a central mechanism linking technology to judgment
outcomes. Second, the findings extend professionalism theory by demonstrating
how algorithm-driven environments reshape the ethical foundations of
professional work. While prior studies emphasize external threats to
professional autonomy, such as commercialization and regulatory pressure, this
study shows that autonomy may also be eroded internally through technologically
mediated decision architectures. Third, the study contributes to institutional
theory by highlighting a tension between procedural legitimacy and substantive
ethical legitimacy. Algorithm-driven auditing enhances formal compliance and
efficiency but may weaken ethical engagement at the individual level, creating
a legitimacy imbalance with long-term consequences for the profession.
Professional and regulatory implications
From a professional standpoint, the findings suggest that audit firms must
reconsider how algorithmic tools are integrated into audit practice. Treating
algorithms as neutral technical instruments overlooks their ethical and
behavioral effects. Audit firms should therefore embed ethical governance
mechanisms within algorithm-driven workflows, including mandatory judgment
review points and explicit documentation of ethical considerations when relying
on system outputs [123]. For regulators and standard setters, the results
indicate a need to move beyond purely technical guidance on audit technology.
Existing standards emphasize data analytics and continuous auditing but provide
limited direction on managing ethical risks associated with algorithmic
reliance. Updating ethics codes and quality management standards to explicitly
address algorithm-assisted judgment would help clarify accountability and
reinforce auditors’ moral responsibility.
Social implications
At the societal level, the findings raise concerns about public trust in the
auditing profession. Auditing’s social license is grounded in the belief that
auditors exercise independent, ethically grounded judgment in the public
interest. If algorithm-driven auditing weakens ethical engagement, this trust
may be eroded even when audits appear technically rigorous. Addressing ethical
fragility is therefore essential for sustaining the profession’s legitimacy in
the digital era [124].
Based on the empirical findings and their theoretical interpretation, this study
proposes the following recommendations.
First, audit firms should institutionalize ethical checkpoints within algorithm-driven
audit processes. These checkpoints should require auditors to explicitly assess
the ethical implications of system-generated recommendations and to document
the rationale for accepting or overriding algorithmic outputs. Such practices
would counteract automation bias and reinforce ethical ownership of judgments.
Second, professional education and continuous training programs should integrate ethics
and technology rather than treating them as separate domains. Training should
focus not only on how to use algorithmic tools but also on how such tools
reshape judgment authority, moral agency, and professional responsibility
[125].
Third, regulators and standard setters should revise ethics codes and quality
management standards to explicitly address algorithm-assisted judgment. Clear
guidance on accountability, documentation, and ethical responsibility in
algorithm-driven audits would reduce ambiguity and strengthen professional
discipline.
Fourth, audit oversight bodies should develop inspection and review procedures
specifically tailored to algorithm-driven engagements. Such procedures should
assess not only technical compliance but also whether ethical considerations
are meaningfully integrated into judgment processes [126].
Finally, future research should examine additional moderators of ethical fragility, such
as organizational culture, leadership tone, and individual moral identity.
Longitudinal and qualitative studies could further illuminate how ethical
fragility evolves over time in digitally intensive audit environments [127].
Table 10 presents the implications and recommendations framework.
Overall conclusions
This
study provides comprehensive empirical and theoretical evidence that
algorithm-driven auditing reshapes auditors’ professional judgment through
intertwined cognitive and ethical mechanisms. The results demonstrate that
reliance on algorithmic tools does not automatically enhance audit judgment
quality and may, under certain conditions, impair it. This finding directly
challenges technologically deterministic assumptions in contemporary audit
analytics literature and reinforces earlier concerns regarding automation bias
and diminished professional skepticism. More importantly, the study establishes
ethical fragility as a central mechanism through which algorithmic reliance
influences judgment quality. Rather than viewing ethical issues as peripheral
or normative considerations, the findings confirm that ethical sensitivity is
structurally embedded within professional judgment processes. This conclusion
aligns with behavioral ethics research emphasizing that ethical failures often
arise unintentionally due to situational and organizational factors rather than
deliberate misconduct.
Key theoretical contributions
The
study makes several significant theoretical contributions to auditing and
accounting research. First, it introduces and empirically validates ethical
fragility as a behavioral–cognitive construct that mediates the relationship
between technology and professional judgment. This contribution advances prior
audit technology research, which has largely examined cognitive effects while
under-theorizing ethical mechanisms. Second, by integrating behavioral decision
theory, professionalism theory, and institutional theory, the study provides a
unified explanatory framework for understanding algorithm-driven auditing. The
findings demonstrate that algorithmic systems may simultaneously enhance
procedural legitimacy and undermine ethical legitimacy, creating a tension that
has not been sufficiently theorized in prior literature. This theoretical
integration responds to calls for deeper conceptualization of professional
judgment in digitally intensive environments [128-131].
Practical and regulatory implications
From
a practical perspective, the findings have important implications for audit
firms, regulators, and standard setters. Audit firms should recognize that
algorithmic tools are not ethically neutral and that their deployment requires
explicit ethical governance mechanisms. Ethical fragility should be treated as
a measurable professional risk and addressed through training, judgment review
processes, and accountability frameworks. For regulators and standard setters,
the study highlights the need to update ethical codes and audit quality
standards to explicitly address algorithm-assisted judgment. Existing guidance
often emphasizes technical compliance while providing limited direction on
ethical accountability in algorithm-driven audits. Clarifying auditors’
responsibilities when relying on algorithmic outputs is essential for
maintaining professional integrity and public trust.
Limitations and future research directions
Despite
its contributions, the study is subject to several limitations that suggest
avenues for future research. First, the empirical analysis relies on
cross-sectional data, which limits the ability to capture the dynamic evolution
of ethical fragility over time. Longitudinal studies could provide deeper
insight into how repeated exposure to algorithm-driven auditing affects ethical
judgment and professional identity. Second, future research could examine
additional moderators and boundary conditions, such as organizational culture,
leadership tone, and individual moral identity, to better understand when
ethical fragility is most likely to emerge. Experimental and qualitative
approaches may also enrich understanding of how auditors interpret and negotiate
ethical tensions in algorithm-driven environments. Finally, comparative studies
across regulatory regimes would enhance the generalizability of the findings and
inform international standard-setting debates.
The
authors declare that there is no conflict of interest regarding the publication
of this paper. The authors have no financial, personal, or professional
relationships that could have appeared to influence the work reported in this
study.