Indeed, the potential is fascinating: reduced corruption, higher compliance and a wider tax net. Yet a crucial question remains unanswered. Who audits the algorithm?
The FBR’s recent 2025 reform roadmap shows that AI and advanced data analytics are no longer experimental tools. They are being institutionalised to build risk profiles, flag suspicious returns and prioritise audits across income tax, sales tax and customs operations. The motivation behind this is clear.
The country’s tax-to-GDP ratio has lingered around 10-11 per cent for decades, far below that of regional peers. Manual systems have failed to bridge the compliance gap, while discretion has historically enabled rent-seeking. AI brings a new horizon, promising scale, accuracy, speed and consistency where the human system has struggled and grown nearly obsolete.
International development institutions broadly support this direction. The Asian Development Bank (ADB), for instance, has reported that AI allows tax authorities to automate routine processes, target audits more precisely and redeploy skilled staff to complex cases, improving both efficiency and transparency. In theory, algorithmic implementation reduces arbitrary decision-making and limits the face-to-face interactions that breed corruption.
Yet governance is about more than efficiency; it is also about legitimacy. The government should therefore be aware that AI systems are only as reliable as the data, assumptions and incentives embedded within them. In Pakistan’s case, the digital infrastructure underpinning these systems remains uneven. Outdated registries, inconsistent sectoral reporting, informal economic activity and data-quality gaps pose substantial risks. An algorithm trained on inadequate or biased data does not merely make mistakes; it compounds them.
The introduction of the Track and Trace System (TTS) demonstrates both the potential and the peril. The FBR plans to extend TTS beyond tobacco, sugar, cement and fertiliser into tiles, textiles and green leaf threshing units, citing tens of billions of rupees in annual tax evasion in these sectors. While digital monitoring can restrain under-reporting, it also concentrates vast power in automated surveillance systems. Errors in tagging, faulty sensors or incorrect production benchmarks can trigger penalties with limited human review, and that is where chaos begins.
For taxpayers, the concern is not the technology, which they will learn; it is the ambiguity. Few citizens or businesses understand how risk scores are generated, why certain returns are flagged, or how automated assessments can be challenged. Appeal mechanisms exist, but they are designed for human decisions. If an AI model flags a taxpayer, who bears responsibility, and how should one respond? That is where the ambiguity lies.
Worldwide, AI-based tax administrations are grappling with this problem. OECD countries increasingly stress “human-in-the-loop” systems, mandatory algorithmic audits and communication standards to ensure that AI-driven decisions can be interrogated and reversed. Pakistan, by contrast, is deploying these tools first and asking questions later, a reactive approach. Laws, regulations and oversight mechanisms must be developed first, or at least in parallel, to ensure the rightful use of AI in governance.
This governance gap matters. A tax return reflects trust between citizen and state. If an AI-based system is perceived as a black box that fines without explanation, it deepens resistance. Worse, opaque systems can shift discretion from frontline officers to unaccountable algorithms, replacing one form of arbitrariness with another.
What we need now is a second phase of reform focused on accountability and transparency. I suggest four policy measures for this phase.
First, we need a comprehensive AI assessment framework for revenue institutions, defining bias-testing protocols, accuracy thresholds and independent review mechanisms. Algorithms that influence audits and penalties should be periodically assessed by third-party auditors.
Second, taxpayer rights must be updated and made publicly available. Citizens should have a legal right to an explanation when AI-based systems generate audits or enforcement actions, along with an accessible, user-friendly appeal mechanism that includes human oversight.
Third, transparency must prevail through public dashboards that clearly report disputed cases, error rates and reversed AI-assisted decisions. Such disclosure reflects institutional confidence and builds credibility.
Last but not least, capacity building and digital upskilling of the workforce are vital. Tax officials must be competent enough not only to use AI tools efficiently but also to question them. Blind dependence on AI-generated outputs is dangerous.
Pakistan’s revenue challenge is real, and digital transformation is inevitable. But governance cannot be automated. If AI is becoming the state’s new tax collector, then auditing the algorithm is not a technical luxury, but a democratic necessity.