AI AT THE PORT: BALANCING EFFICIENCY WITH ACCOUNTABILITY IN GHANA’S CUSTOMS VALUATION SYSTEM
INTRODUCTION
The Ministry of Finance's
introduction of an AI-driven solution—Publican—for customs valuation marks a significant
step in Ghana's journey toward digital transformation in public service
delivery. By leveraging artificial intelligence to determine Harmonised
System (HS) codes and import values, the initiative promises improved
efficiency, consistency, and revenue assurance at the ports.
While the potential benefits are
evident, the deployment of AI in such a sensitive and consequential domain
raises important governance, legal, and ethical considerations,
particularly in how this ambition is being operationalised. In the context of
longstanding concerns around discretionary practices and corruption in
customs processes, the introduction of an AI-driven solution is likely to
generate significant optimism as a tool for enhancing transparency and
control.
However, the deployment of AI in
public service delivery does not simply resolve one problem; it has the
potential to create new ones. In particular, AI systems designed to
reduce human discretion may, if not properly governed, introduce risks relating
to human rights, dignity, fairness, and accountability. The challenge,
therefore, is not only to improve efficiency, but to ensure that such
improvements do not come at the expense of the very principles public
institutions are meant to protect.
The purpose of this article is to
examine these implications within the context of high-impact public
decision-making. It argues that while AI offers clear efficiency gains, its
deployment in public service delivery raises critical governance, legal, and
ethical concerns—including risks of over-reliance, accountability gaps,
and data governance challenges—and must therefore be accompanied by a deliberate
framework that ensures fairness, transparency, accountability, equity,
and public trust.
This requires a multi-stakeholder,
participatory approach to AI deployment—one that considers not only
technical performance, but also legal, ethical, and societal risks. Such
an approach must embed principles of security by design and privacy
by design, ensuring that systems are developed and deployed in a manner
that protects the public from harm while delivering intended benefits.
At its core, this is not a
debate about technology. It is a question of governance—how decisions are made,
who is responsible, and how the public is protected when systems fail.
THE PROMISE: WHY AI IN CUSTOMS MATTERS
The use of AI in customs administration offers
several compelling advantages. It can:
· Reduce processing time and administrative bottlenecks
· Promote consistency in valuation decisions
· Minimise opportunities for discretionary abuse
· Enhance revenue mobilisation through data-driven insights
It is also important to
acknowledge that concerns around discretionary practices and corruption within
customs processes have long been part of the broader reform context. The
introduction of an AI-driven solution can therefore be understood, in
part, as an attempt to reduce human discretion and strengthen transparency in
valuation outcomes.
In a context where efficiency,
transparency, and revenue assurance are longstanding policy objectives, the
introduction of AI is both timely and forward-looking. However,
these very advantages—particularly the consistency and perceived authority
of AI-generated outputs—raise important questions about how such systems
are relied upon and governed in practice.
GOVERNANCE, ACCOUNTABILITY AND THE RISK OF OVER-RELIANCE
A central concern with the Publican system is an emerging over-reliance on, and overconfidence in, AI outputs, a well-documented ethical challenge in AI deployment. While the system is introduced as an efficiency-enhancing tool, its design trajectory, and the risk that AI-generated values may eventually be adopted without meaningful human intervention, raise important governance questions.
AI, by its nature, should
function as a digital adviser that proposes, with the human expert
retaining the authority to dispose. It must remain a decision-support
system, not a decision-maker. Where this distinction is blurred, the risk
is not merely technical—it is fundamentally about accountability.
If an AI-generated valuation
leads to economic harm to a business, the question arises: who bears
responsibility? Is it the system provider, Truedare Ventures, or the
deployer, the Ministry of Finance? The absence of clearly defined provider
and deployer obligations risks creating an accountability gap that could
undermine public trust.
Equally important are questions
surrounding the training data and model integrity. What data has been
used to train the system? How has bias been identified and mitigated? Without
clear answers, there is a real risk that the system may replicate and scale
existing distortions within customs valuation practices.
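To illustrate what such mitigation can involve in practice, the sketch below shows one elementary audit a provider could run, using hypothetical data and field names: comparing the model's valuation error across country-of-origin groups to surface systematic distortions. It is illustrative only and says nothing about how Publican was actually trained or tested.

```python
# Illustrative bias audit: compare valuation error across groups.
# Field names and figures are hypothetical; this does not describe
# Publican's actual training data or testing regime.
import pandas as pd

# Toy evaluation set: actual vs AI-assessed values per consignment.
df = pd.DataFrame({
    "origin":   ["CN", "CN", "DE", "DE", "NG", "NG"],
    "actual":   [10_000, 8_000, 12_000, 9_500, 7_000, 6_400],
    "assessed": [11_500, 9_100, 12_100, 9_600, 8_900, 8_000],
})

# Signed percentage error: positive means systematic over-valuation.
df["pct_error"] = (df["assessed"] - df["actual"]) / df["actual"]

# A large gap between groups is a red flag that the model may be
# replicating historical distortions rather than market value.
print(df.groupby("origin")["pct_error"].mean().round(3))
```

Even a simple audit of this kind, run routinely and published in summary form, would go some way toward answering the bias questions posed above.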
The issue of algorithmic
explainability is also critical. Traders affected by AI-driven decisions
must be able to understand the basis upon which valuations are made. This
raises broader concerns about the right to information, particularly
where decisions have direct financial implications.
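By way of illustration only, the sketch below shows one widely used technique for producing such per-decision explanations, assuming a hypothetical tabular valuation model and the open-source shap library; it does not reflect Publican's actual, undisclosed architecture.

```python
# Illustrative per-decision explanation for a hypothetical tabular
# customs-valuation model. Publican's real design is not public;
# this assumes a gradient-boosted regressor and the `shap` library.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical Bill of Entry features (toy data, not real BOE fields).
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "declared_unit_price": rng.uniform(1, 100, 500),
    "quantity": rng.integers(1, 1000, 500),
    "freight_cost": rng.uniform(10, 500, 500),
    "origin_risk_score": rng.uniform(0, 1, 500),
})
y = X["declared_unit_price"] * X["quantity"] + X["freight_cost"]

model = GradientBoostingRegressor().fit(X, y)

# For one consignment, attribute the assessed value to input features
# so an affected trader can see what drove the figure.
explainer = shap.TreeExplainer(model)
case = X.iloc[[0]]
contributions = explainer.shap_values(case)[0]
for feature, contrib in sorted(
    zip(X.columns, contributions), key=lambda t: -abs(t[1])
):
    print(f"{feature}: {contrib:+.2f}")
```

The point is not the particular tool, but that intelligible, decision-level explanations are technically feasible and should be a condition of deployment.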
Ultimately, the challenge is not
the use of AI itself. AI holds significant potential to improve efficiency and
consistency in public service delivery. However, where governance, legal, and
ethical considerations are not adequately addressed, the system risks being
perceived as a “black box”, leading to resistance from stakeholders.
Public acceptance of AI in
government will not be determined solely by its benefits, but by the extent to
which it is deployed in a manner that is fair, transparent, accountable, and
equitable. In this regard, AI is not simply a technological tool—it is a
governance challenge.
DATA GOVERNANCE, PRIVACY AND THE RISK OF FUNCTION CREEP
Beyond questions of
accountability and oversight, the deployment of the Publican system raises
equally important concerns around data governance and privacy.
The system relies on Bills of
Entry (BOE) data, which may include commercially sensitive information and,
in certain instances, personally identifiable data. The introduction of
such data into an AI system requires careful consideration of how the data
is collected, processed, stored, and potentially reused.
At a minimum, the deployment of
the system should be preceded by a Data Protection Impact Assessment (DPIA)
or, more broadly, a Data Governance Impact Assessment. This is necessary
to evaluate risks relating to data misuse, unauthorised access, and
unintended secondary use of information. Without such an assessment, the
system risks introducing governance vulnerabilities that extend beyond
valuation accuracy into the domain of data rights and privacy protection.
A critical question also arises
as to the nature of the AI model being deployed. What model architecture
underpins the system? Is it a proprietary model, or does it rely on external
platforms? More importantly, how is the data being used within that model?
There must be clarity on whether customs data submitted through the system will
be:
- used strictly for valuation purposes, or
- incorporated into broader training datasets by the provider.
If the latter is not explicitly
restricted, there is a real risk that sensitive national and commercial data
could be repurposed beyond its original intent, raising serious concerns about data
sovereignty and control.
This leads to the broader issue
of function creep—where data collected for a specific regulatory purpose
is gradually used for other unintended or unauthorised purposes. In the absence
of clear legal and contractual safeguards, systems such as Publican may evolve
beyond their initial scope, with significant implications for privacy,
fairness, and institutional trust.
The core issue is not simply
whether the system works, but whether it operates within clearly defined and
enforceable boundaries. Without strong data governance, even a technically
accurate system can produce outcomes that are legally questionable and
ethically problematic.
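One way such boundaries can be made technically enforceable, rather than merely contractual, is purpose-based access control. The sketch below illustrates the idea with hypothetical purpose labels and field names; it is not a description of Publican's safeguards.

```python
# Illustrative purpose-limitation guard: every access to BOE data
# must declare a purpose, and only declared purposes are allowed.
# Purpose labels and fields here are hypothetical.

ALLOWED_PURPOSES = {"valuation"}  # e.g. "model_training" is NOT allowed

class PurposeViolation(Exception):
    pass

def access_boe_record(record: dict, purpose: str) -> dict:
    """Release a Bill of Entry record only for an approved purpose,
    and log the access so any secondary use can be audited later."""
    if purpose not in ALLOWED_PURPOSES:
        raise PurposeViolation(f"BOE data may not be used for '{purpose}'")
    print(f"AUDIT: record {record['id']} accessed for {purpose}")
    return record

record = {"id": "BOE-2024-001", "importer": "ACME Ltd", "value": 15_000}
access_boe_record(record, "valuation")        # permitted
# access_boe_record(record, "model_training") # raises PurposeViolation
```

Controls of this kind do not replace legal and contractual safeguards, but they make function creep detectable and auditable rather than invisible.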
In this regard, transparency must
extend beyond outputs to include data usage, model behaviour, and lifecycle
management. Public trust in AI systems will depend not only on the accuracy
of their decisions, but on the confidence that data is being used
responsibly, proportionately, and for its intended purpose only.
LESSONS FROM GLOBAL AI GOVERNANCE: PUBLICAN AS A HIGH-RISK PUBLIC DECISION SYSTEM
The relevance of global AI
governance frameworks is not merely in their existence, but in how they
help us understand and position systems such as Publican. The key
question is not whether Ghana is legally bound by these frameworks, but what
they reveal about the nature of the system being deployed and the standards
it ought to meet.
From an international
best-practice perspective, the Publican system would likely be classified
as a high-risk AI system. This is because it directly influences public
decision-making with immediate financial and legal consequences for
businesses, particularly in the determination of customs values and duties.
Systems operating in such contexts are not treated as neutral technical
tools, but as decision-shaping mechanisms requiring structured governance
and oversight.
Frameworks such as the OECD AI Principles, the EU AI Act, the G7 Hiroshima AI Process, and the Seoul Declaration
converge on a common position: where AI systems affect rights, obligations,
or economic outcomes, they must be governed through clear accountability
structures, transparency, risk management, and meaningful human oversight.
These frameworks emphasise that AI systems must be lawful, fair,
transparent, robust, and accountable across their lifecycle.
Ultimately, applying these
frameworks to Publican does not constrain innovation. It ensures that
innovation is deployed in a manner that is legitimate, trusted, and
sustainable within a public governance context. Without these governance
and ethical guardrails, the system risks undermining the very public
trust and legitimacy upon which its success depends.
In practice, these principles
translate into distinct but complementary obligations for the Publican
system. The provider, Truedare Ventures, cannot be insulated from
responsibility on the basis that Ghana does not yet have a dedicated AI
regulatory framework. As a matter of minimum international governance
practice, the provider should ensure that the system is built on representative
and reliable training data, subjected to bias testing, supported by technical
documentation, and capable of providing intelligible explanations
for its outputs. It must also be transparent about the system's capabilities,
limitations, and appropriate use context. These are not aspirational
standards—they are increasingly regarded as baseline expectations for
responsible AI deployment.
The deployer, the Ministry of
Finance, bears a corresponding obligation to ensure that the system is used
within a framework that preserves procedural fairness and institutional
accountability. This includes maintaining meaningful human oversight,
ensuring that AI outputs do not become automatically determinative without
review, providing accessible mechanisms for challenge and redress,
and clearly establishing where responsibility lies when decisions influenced
by the system result in harm.
The critical point is that AI
systems do not carry responsibility—institutions do. Where this distinction
is not clearly maintained, governance gaps emerge, and accountability
becomes difficult to enforce.
In this context, the absence of a
domestic AI statute should not be interpreted as a regulatory void or
haven. Rather, it places a greater responsibility on both the provider
and the deployer to adhere to generally accepted international standards
of AI governance. The issue, therefore, is not one of legal compulsion, but
of institutional responsibility and public trust.
THE WAY FORWARD: BUILDING TRUST THROUGH GOVERNANCE
To ensure that the Publican system achieves its intended
objectives while maintaining public trust, governance must precede
automation. This requires a deliberate set of measures.
1. Maintain Meaningful Human Oversight
AI must remain a decision-support system, with
customs officers empowered to exercise independent and meaningful judgment,
not constrained by system outputs. This requires that officers receive adequate training in the logic underpinning the system.
At its core, AI should function as a system that proposes,
with the human expert retaining the authority to review, override, and
decide. The human must remain in control of the system—not the other way
round.
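To make the propose-and-dispose pattern concrete, the following minimal sketch illustrates a human-in-the-loop workflow in which every final valuation is attributable to a named officer. All names and fields are hypothetical; Publican's actual interface has not been published.

```python
# Illustrative human-in-the-loop valuation workflow. All identifiers
# (Proposal, Decision, decide, officer IDs) are hypothetical.
from dataclasses import dataclass
import datetime

@dataclass
class Proposal:
    hs_code: str
    value: float
    rationale: str   # model explanation shown to the officer

@dataclass
class Decision:
    proposal: Proposal
    final_value: float
    officer_id: str
    overridden: bool
    reason: str
    timestamp: str

def decide(proposal: Proposal, officer_id: str,
           officer_value: float | None = None,
           reason: str = "") -> Decision:
    """The AI proposes; the officer disposes. Every decision is
    attributable to a named officer, and overrides are recorded with
    reasons so accountability can be audited later."""
    overridden = officer_value is not None and officer_value != proposal.value
    return Decision(
        proposal=proposal,
        final_value=officer_value if overridden else proposal.value,
        officer_id=officer_id,
        overridden=overridden,
        reason=reason or "accepted AI proposal after review",
        timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
    )

# Example: an officer reviews and overrides an AI-proposed value.
p = Proposal(hs_code="8703.23", value=15_000.0,
             rationale="matched to recent comparable imports")
d = decide(p, officer_id="GRA-0421", officer_value=13_200.0,
           reason="invoice and inspection support lower value")
print(d.overridden, d.final_value)
```

The design choice that matters here is the audit trail: oversight is only meaningful if acceptances and overrides alike are logged and reviewable.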
2. Define Clear Accountability Structures
The respective responsibilities of the provider (Truedare
Ventures) and the deployer (Ministry of Finance) must be clearly
defined to ensure that liability is not diffused in the event of harm.
Harm cannot be attributed to the AI system itself. It must
be traceable to a responsible human institution or actor—either the
provider or the deployer—within a clearly established accountability
framework.
3. Ensure Transparency and Explainability
Stakeholders must be able to understand how AI-generated
values are derived. This is essential for both trust and effective dispute
resolution.
AI deployment in public service delivery offers significant
benefits, but without transparency and explainability, it risks
resistance. Where systems are perceived as opaque, public trust
erodes—not because of the technology itself, but because of how it is
introduced and applied.
4. Strengthen Data Governance and Bias Mitigation
The system must be subject to regular audits to
ensure that training data is representative and that biases are
identified and corrected.
The Publican system should initially operate as a parallel
validation system, rather than a fully determinative one. AI-generated
values must be tested and validated by customs officers and valuation
experts. Transition to full deployment should be based on demonstrated
consistency, not assumed accuracy.
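As an illustration, the sketch below shows what such a parallel, shadow-mode run could measure: the rate at which AI-generated values agree with officer determinations within a tolerance. The five per cent threshold is an assumption for illustration, not a policy figure.

```python
# Illustrative shadow-mode comparison: the AI runs in parallel and
# its outputs are measured against officer valuations before the
# system is allowed to influence actual duty assessments.

def agreement_rate(ai_values, officer_values, tolerance=0.05):
    """Share of consignments where the AI value falls within
    `tolerance` of the officer's determination."""
    pairs = list(zip(ai_values, officer_values))
    agreed = sum(
        1 for ai, officer in pairs
        if abs(ai - officer) <= tolerance * officer
    )
    return agreed / len(pairs)

# Toy data for five consignments (values in GHS).
ai = [15_000, 8_200, 43_500, 1_100, 27_000]
officers = [14_800, 9_500, 44_000, 1_050, 27_300]

print(f"Agreement within 5%: {agreement_rate(ai, officers):.0%}")
# Transition to determinative use only once agreement is demonstrated
# consistently across commodity classes and over time.
```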
Deploying the system without this validation risks
introducing systemic errors at scale, with direct economic
consequences for businesses—particularly in the absence of a transparent
and effective redress mechanism.
5. Protect Procedural Fairness and the Right to Challenge
Affected parties must retain the ability to question and
appeal AI-influenced decisions, supported by accessible and meaningful
explanations.
This is particularly important in situations where
businesses may be required to comply with higher valuations determined by human
officers despite differing AI outputs. Without clear mechanisms for challenge
and review, the system risks functioning as a “black box”,
undermining fairness and legitimacy.
CONCLUSION
The introduction of AI into
Ghana's customs valuation system represents a significant opportunity to
modernise public service delivery. However, efficiency alone cannot be
the measure of success. The real test of this initiative will lie in the strength
of the governance framework that underpins it.
AI is not the problem—its
governance is. If deployed without adequate governance, AI risks undermining
the very objectives it seeks to achieve. When deployed within a framework
that prioritises fairness, transparency, accountability, and equity, AI
can enhance trust and improve outcomes. Without such a framework, even the most
advanced systems risk resistance—not because of what they offer, but
because of how they are implemented.
As noted earlier, the absence of a domestic AI statute is not a regulatory void or haven; it places a greater responsibility on both the provider and the deployer to adhere to generally accepted international standards of AI governance.
Ultimately, the effectiveness of AI in public administration will depend on the governance structures and change management processes that guide its use. AI is not just a tool of efficiency; its deployment is a governance undertaking that must be deliberately designed to protect rights and build trust.