Live Mint, September 30, 2020
By Pradeep S. Mehta
Our charts that rank states on the ease of doing business are not helping them pursue real reforms
In his classic book, The Acquisitive Society, written a century ago, the Indian-born English economic historian R.H. Tawney criticized modern capitalist and industrial societies for promoting excessive economic self-interest. He cautioned nations against moving with the energetic futility of a squirrel in a revolving cage, and advised recourse to moral principles.
Tawney’s works are relevant today in more ways than one. In recent years, nations have been obsessed with ease-of-doing-business rankings released by the World Bank. India is no exception. In fact, we pride ourselves on taking this exercise to the sub-national level, through the Business Reform Action Plan (BRAP), under which states are ranked on business reforms they claim to have implemented.
This over-emphasis on quantification and a mindless quest to fit performance into predefined categories have several unintended consequences. Climbing charts has become an end in itself, and the stated objective of making life easier for entrepreneurs appears to have taken a back seat.
For instance, the Department for Promotion of Industry and Internal Trade (DPIIT) methodology note for the BRAP rankings provides that state governments need not offer new evidence on indicators for which proof was given and approved in the previous round. If such is the case, how is one to judge if reforms have been sustained? Many measures relate to digitizing processes and reducing the human interface. However, a random check of weblinks (as listed on the BRAP portal) that claim to host relevant information and have a seamless user interface shows that several of them are broken.
The methodology note also provides that states/Union territories will be classified on the sole basis of a feedback score displayed on the BRAP portal. For general points, it says, industrial estates will be identified and feedback from industrial undertakings in such estates will be obtained. This process of identifying respondents runs the risk of excluding micro and small enterprises, which bear a disproportionate burden of regulatory compliances. The note also states that for user-specific points, user lists are to be sent by states to the DPIIT. Does this not raise conflict-of-interest concerns?
Moreover, it appears that these questionnaires are likely to assess if respondents have “felt” specific reforms, with answers to be either affirmative or negative. Does this obsession with yes-or-no binaries not prevent a better understanding of the actual challenges faced by entrepreneurs that would help design true bottom-up reforms? A cut-off of 70% responses in favour of “reforms being felt” has been arbitrarily put in place to offer full marks on an indicator, without any attempt to examine the concerns of the rest. While the BRAP rankings for 2018-19 were released recently, the portal still does not display its feedback scorecard, saying that it will be updated once the feedback exercise is concluded. If it hasn’t been completed yet, on what basis were BRAP scores calculated?
The methodology note also provides that feedback from stakeholders will be sought only on indicators that have been implemented and whose evidence is approved by the DPIIT. No feedback will be solicited on other indicators. This means that there will be no deep-dive on indicators that state governments find challenging to implement. No effort will be made, it seems, to understand the issues faced, or the impact on enterprises of such an absence of reforms. Shouldn’t the whole point of the exercise be to support states in adopting reforms they find difficult to implement? Instead, such issues are ignored. Highlighting success stories while ignoring areas with scope for improvement is not just pointless; it also imposes significant opportunity costs in terms of the government’s time and money.
Moreover, it is not clear if any feedback is obtained from relevant stakeholders on the design of indicators for BRAP scores, and on their addition to or removal from the list. For instance, to prepare the 2018-19 list, 97 indicators were deleted from and 25 added to the 2017-18 list. Also, different weights have been given to indicators, presumably without any stakeholder feedback on priorities or the challenges faced. How can lists with different indicators be compared and the progress made by states adjudged?
Such issues should raise serious concerns over the BRAP methodology, whether the feedback is truly inclusive, and if its scores are really representative. The World Bank has recently paused the publication of its doing-business reports, and is conducting a systematic review of data changes, an audit of data collection and review processes, and an appraisal of data security. Closer home, we need a comprehensive assessment of BRAP rankings to examine not only the methodology, but also their objectives and impact.
More importantly, there is a need for a conceptual examination of the rationale of such rankings themselves. Philosopher Michael J. Sandel, in his recent book The Tyranny of Merit, highlights how we live in an age of winners and losers, with the odds heavily stacked in favour of the already-fortunate. He argues that stalled social mobility and entrenched inequality fly in the face of all the rhetoric about rising—that those who work hard and play by the rules will rise as far as their effort and talent will take them. This could be equally true for businesses. Should we not focus on real issues, then, rather than submitting to the tyranny of doing-business rankings?
Pradeep S. Mehta is secretary general, CUTS International