The False Conflict Holding Back Emissions Accounting
And the Need for Harmony of Standards and Collegiate Collaboration (a guest post by Karl H. Richter)
“The fossil fuel industry is not the enemy — it is the emissions,” says Mia Mottley, Prime Minister of Barbados. This provocative truism is at the heart of an inglorious brawl between unlikely foes — emissions accountants themselves. I hope they make amends when they meet later this month in Aspen, Colorado.
Reactions to recent articles in the Wall Street Journal and Financial Times have entrenched some clichéd caricatures. The heroes are the traditional experts behind the Greenhouse Gas (GHG) Protocol, which after nearly 30 years of tireless effort has become the de facto standard for disclosing emissions. Their recent partnership with the International Organization for Standardization (ISO) to harmonize their two standards is being celebrated. The villains are new industry bodies like Carbon Measures and the E-Ledgers Institute that want to introduce an alternative emissions accounting standard. Critics say their goal must be to manufacture doubt because they are in the pockets of the petrochemical industry. Why else would these upstarts challenge existing standards if not to delay decarbonization for their paymasters?
But this narrative of two competing standards is simplistic and wrong. It describes a false choice that comes from inattentional blindness — the failure of experts to recognize important information because it falls outside their usual frame of reference.
As a result, both sides in this war of words are right and wrong at the same time. Each is unable to see the flaws in its own arguments, while also failing to see the benefits of the other's perspective. Shouting louder, repeating the false choice, or pressing only one side of the argument does a disservice to the merits of the other.
Let me explain.
From a business management perspective, the GHG Protocol is right to amplify emissions hotspots and highlight feedback loops both upstream and downstream. What financial accountants and macroeconomic statisticians see as multiple counting, the GHG Protocol correctly frames as an essential way of revealing which companies, industries, and supply chain configurations can have the biggest effect in reducing systemic emissions.
However, that is not the only perspective that matters.
Onerous new regulations — carbon border adjustment mechanisms (CBAM) and the like
Imagine you are a procurement officer at a European manufacturing company that imports steel. The EU CBAM obliges you to get data from your suppliers in, say, India and China about the emissions embodied in the steel they ship to you. At face value, applying CBAM at the product level seems fair. It would be unfair to require European steel mills to implement expensive environmental regulations while foreign competitors do not have to; the polluting mills overseas would be gifted an unfair price advantage. Whilst this may be true, it can also appear to foreign suppliers that CBAM is a protectionist trade tariff masquerading as environmental policy.
Not surprisingly, India, China, and others are raising concerns at the World Trade Organization (WTO), with some even indicating a willingness to pursue legal challenges. Beyond the usual obfuscations in these bun fights, their core critique of CBAM is fundamentally valid because implementation is ambiguous and does not create a level playing field for global actors.
Now imagine having to adjudicate such litigation at the WTO. Can you imagine siding with a calculation methodology that by design distorts true values because it counts the same emissions several times over in the same supply chain? No, of course not!
It is reasonable for a plaintiff to argue that their emissions should only be counted once, that they have no influence over how their customer uses their product (except for fossil fuels and refrigerants, whose intended use by customers will produce a fixed amount of emissions unless countermeasures are implemented), and that the methodological requirements must be specific enough to prevent either side from freely self-determining its analytical boundaries.
Anticipating this, the European CBAM legislation adopted its own “EU method” of calculation rather than the GHG Protocol; the EU method avoids multiple counting and aims to be more prescriptive.
An equivalent logic applies in public procurement, parallel to CBAM. I know of companies that have tried to do the right thing by disclosing a truer picture of their emissions according to GHG Protocol guidance, only to lose competitive government contracts because a rival bidder, also following GHG Protocol guidance, appeared to have lower emissions simply by applying less onerous assumptions and implementation options. Companies have a perverse incentive to game the flexibility and freedoms inherent in the GHG Protocol.
The “reasonable endeavors” approach underpinning GHG Protocol guidance starts to break down with these more demanding requirements because it was never designed as procurement law, nor as a prescriptive tax code, nor to withstand the rigorous scrutiny of international trade litigation. This is intensified when the money changing hands depends on the outcome of the methodology and assumptions used.
Government statisticians and central banks
My work with government statisticians reveals related frustrations. They cannot formulate targeted policy recommendations to legislators because the current emissions data are, in their words, “averages of averages” — a statistical error that occurs when all subgroups are treated as equal, even if they contain vastly different amounts of data, or if the data are of varying quality.
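To make this concrete, here is a minimal sketch with invented numbers (the two subgroups, their sizes, and their intensities are my own hypothetical illustration, not real statistics):

```python
# Hypothetical illustration of the "averages of averages" error.
# Two subgroups of very different size: 2 steel mills and 98 office firms.
steel_mills = [50.0, 70.0]    # tCO2e per EUR 1,000 of output (high emitters)
office_firms = [1.0] * 98     # tCO2e per EUR 1,000 of output (low emitters)

# Wrong: treat both subgroup averages as equal, ignoring subgroup sizes.
avg_of_avgs = (sum(steel_mills) / len(steel_mills)
               + sum(office_firms) / len(office_firms)) / 2
print(avg_of_avgs)            # 30.5 -- wildly overstates the typical firm

# Right: weight every observation equally across the whole population.
all_firms = steel_mills + office_firms
print(sum(all_firms) / len(all_firms))   # 2.18 -- the true mean
```

A policymaker relying on the first number would target the wrong sectors, and the distortion only grows as subgroup sizes and data quality diverge.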
The business management logic of the GHG Protocol, which serves companies and investors well because it reveals hotspots and amplifies feedback loops, fails to serve the needs of macroeconomic statisticians.
In a discussion paper, Dr Ulf von Kalckreuth, Principal Economist-Statistician at the Deutsche Bundesbank (the German central bank), writes that ‘the link between direct and indirect emissions is not trivial … the measurement of indirect emissions in the GHG Protocol tradition is largely ad hoc and dissociated from the well-established measurement of direct emissions’. He subsequently published research to show how the GHG Protocol can be improved with dependable statistical tools such as input-output (I-O) analysis.
In lay terms, I-O analysis uses the output of one calculation as the input to the next, repeating the process recursively. Moreover, I-O analysis has evolved into a powerful statistical tool by incorporating techniques for eliminating the multiple counting of economic activity, which typically occurs when intermediate or partially processed goods are mistakenly added to the final value of finished products.
Von Kalckreuth is not alone. According to Mike Berners-Lee of Small World Consulting, input-output models “solve the problem of incomplete system boundaries” and the “truncation error” of traditional product life cycle assessment.
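For readers who want to see the mechanics, below is a minimal sketch of textbook Leontief I-O accounting with entirely hypothetical numbers. The recursion over supply tiers is solved in one step via the Leontief inverse, and each unit of intermediate output is counted exactly once:

```python
import numpy as np

# Toy two-industry economy (all numbers hypothetical).
# A[i][j] = units of industry i's output needed per unit of industry j's output.
A = np.array([[0.1, 0.3],   # steel used per unit of steel, of manufacturing
              [0.0, 0.1]])  # manufacturing used per unit of steel, of manufacturing

final_demand = np.array([100.0, 200.0])   # units delivered to end consumers

# Total output required economy-wide, covering direct production plus every
# recursive round of intermediate inputs: x = (I - A)^-1 d.
total_output = np.linalg.solve(np.eye(2) - A, final_demand)

# Direct emission intensity of each industry (hypothetical, tCO2e per unit).
intensity = np.array([2.0, 0.5])

print(total_output)              # approx. [185.19, 222.22]
print(intensity @ total_output)  # approx. 481.5 tCO2e, each tonne counted once
```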
Does this mean we need a different carbon accounting standard to serve each of these different needs? Again, no.
But we do need to improve the current methodologies so that they meet the requirements of a broader range of stakeholders and the purposes for which they use emissions data. It is reasonable to demand a common emissions accounting framework that produces comparable data, which can simultaneously serve the different disclosure and analytical requirements of different parties.
Simply put, this requires data interoperability — which is the ability of different information systems to share data across organizational or technical boundaries. It enables the seamless transmission and integration of data without manual intervention. Small World Consulting uses the term hybridization for “combining data from both P-LCAs [product life cycle assessments] and input-output models, to solve the problem of incomplete system boundaries, while keeping the specificity benefits from P-LCAs.”
As climate change becomes an increasingly concerning issue for more diverse stakeholders, from procurement officers to customs officials and central bankers, the demands on data become more exacting and require more statistical rigor and auditability. So, yes, emissions accounting best practice must mature and improve if it is to remain relevant and withstand legal challenge.
The people I know at GHG Protocol would not disagree.
Lessons from history
Can we achieve interoperability between what seem to be divergent requirements? Yes, this is absolutely possible. Clues are provided by analyzing a previous paradigm shift — when photography transitioned from chemical film processing to digital.
Image source: Author (Richter, KH). (2024). Making carbon accounting count (Lecture materials). Frankfurt School of Finance and Management.
Some companies like Fujifilm did well. They realized that the competitive domain was photography, not a particular image processing methodology or technique. As an ambidextrous organization, they had expertise in digital technologies and were encouraged to experiment. They took advantage of the disruptive innovations in digital image capture and successfully managed a period of technological coexistence over several years through the late 1990s and early 2000s. They hedged their bets to realize a win-win, exploiting market demand for both analogue and digital technologies during the transition.
Others, like Kodak, suffered from inattentional blindness (and perhaps also arrogance). They had centered their business model on analogue chemical technologies and could not — or did not want to — see the benefits of digital technology until it was too late (even though they invented the digital camera in 1975). Kodak went bankrupt.
Achieving interoperability in emissions data
So how, concretely, can we achieve interoperability between these ostensibly divergent requirements?
Readers of the WSJ and FT articles I referenced at the start will be forgiven for thinking that there are competing standards in emissions accounting — the traditional GHG Protocol disclosure requirements and a new E-Ledger approach modeled on financial accounting (and possibly also the EU method of CBAM).
I produced the diagram below to show that this is a false choice. These approaches are — or can be — fully compatible with each other. The diagram is necessarily technical because it demonstrates standards alignment in detail, so the key messages are extracted below.
The diagram presents a scenario for a company producing 4,000 widgets in a period; of these, 3,200 are sold and 800 remain unsold in the warehouse.
Image source: Author (Richter, KH). (2026). Neoni App website (Standards alignment). iSumio / EngagedX Ltd.
https://www.neoni-app.com/#standards-alignment
This diagram shows that with good data system design, it is possible for a single emissions accounting system to produce different types of disclosure data, at three primarily different levels — product, company, and macroeconomic.
This is akin to financial accounting systems that can simultaneously serve different requirements:
· Invoices that itemize products and services for customers.
· Balance sheets, profit and loss statements, and cashflow statements that provide executives with essential business management information.
· Modules for running payroll and sales tax (or value added tax) that prepare truncated information for reporting to the authorities.
Similarly, a single emissions accounting system can provide disclosures according to the most demanding requirements of the GHG Protocol for life cycle assessment (LCA), both upstream and downstream. At the same time, it can disaggregate the relevant product-level emissions data for E-Ledgers and pass data to customers for CBAM disclosures, and so on. Importantly, this common system also produces the data required by input-output models for statistically accurate macroeconomic analysis.
Studying the diagram, readers will note that some disclosure values are indeed different (notably between the elements highlighted in orange and green). Debating the pros and cons of each, or whether the corresponding values should be the same or different is for another article. The key point is that all these values, calculated for different purposes and according to different rules, can be produced using the same single emissions accounting system without distorting or corrupting the data that underpins it.
In the scenario depicted above (a short worked sketch follows these points):
· Upstream emissions (according to the GHG Protocol, highlighted in orange) are 40,000 tCO2e and represent the 4,000 widgets manufactured in total this period. 32,000 tCO2e represent the 3,200 widgets sold (according to E-Ledger principles, highlighted in green).
· Each widget has 10 tCO2e of the company’s total emissions this period allocated to it (this is the same according to both GHG Protocol and E-Ledger principles) and can be assigned to products via invoices or digital product passports (DPPs).
· The E-Ledger records 8,000 tCO2e of retained emissions as a separate line item, which represent the 800 unsold widgets remaining in the warehouse. Until sold, these widgets remain in the possession of the manufacturing company and on their balance sheet as inventory (both in terms of financial cost and emissions liability).
· Full life cycle assessment (LCA) values are presented as a range, because downstream emissions from using products are typically uncertain depending on the use-case. The full LCA range is between 15 tCO2e and 30 tCO2e per widget, respectively 60,000 tCO2e and 120,000 tCO2e across all widgets manufactured in the period.
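To make the arithmetic explicit, here is a minimal sketch reproducing the scenario’s figures (the variable names are my own; the numbers are those in the diagram):

```python
# Reproducing the widget scenario's arithmetic (figures from the diagram).
produced, sold = 4_000, 3_200
upstream_per_widget = 10.0     # tCO2e allocated to each widget this period

upstream_total = produced * upstream_per_widget     # 40,000 tCO2e (GHG Protocol, orange)
transferred = sold * upstream_per_widget            # 32,000 tCO2e (E-Ledger, green)
retained = (produced - sold) * upstream_per_widget  # 8,000 tCO2e held as inventory

# Downstream use-phase emissions are uncertain, so the full LCA is a range.
lca_low, lca_high = 15.0, 30.0  # tCO2e per widget, cradle to grave
print(upstream_total, transferred, retained)    # 40000.0 32000.0 8000.0
print(produced * lca_low, produced * lca_high)  # 60000.0 120000.0
```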
This last point addresses a key criticism of E-Ledgers — that the E-Ledger methodology allows fossil fuel companies, for example, to get off the hook by not disclosing their downstream emissions when customers burn the fuels that are sold to them. But this criticism does not hold up from a data science perspective. As I show above, downstream emissions can easily be accommodated within this interoperable data architecture. Analysts and regulators can have access to all the data they need for a full cradle-to-grave LCA — while the clear delineation of upstream and downstream data adheres to the accounting definition of control, based on principles of recognition and derecognition.
Perhaps the criticism of E-Ledgers is rooted in the language used by the founders of the E-Ledgers Institute, Professors Robert Kaplan and Karthik Ramanna, who are academics in accounting, not data science. What I refer to as a “cascade of emissions data,” and what von Kalckreuth describes as a “recursive calculation,” Kaplan and Ramanna describe by saying a “company transfers those emissions to its customers when those outputs are sold, akin to inventory accounting.” From that phraseology, the E-Ledger approach has (understandably) been misconstrued by GHG Protocol traditionalists as “offloading the burden of managing emissions (liabilities) to customers” and as something that “limits responsibility” of companies, especially those selling fossil fuels.
But passing on data is not the same as passing on responsibility!
Standards alignment (not fragmentation)
Reacting to concerns of fragmentation in the emissions accounting landscape, Tim Mohin (Steering Committee Member of the GHG Protocol, and Partner and Director at Boston Consulting Group) wrote in his newsletter, “Let us hope harmony prevails.” The most eloquent response to this call might be from Hilary Eastman of Confluence Advisory (a CFA Charterholder and former ESG reporting partner at KPMG and Director at PwC), who says “the ideal system would harness the best of both — E-Ledger principles for accounting and the GHG Protocol standards for disclosure.”
Unfortunately, the feud between the GHG Protocol and the E-Ledgers Institute is being stirred up in the media, creating more disharmony. Fortunately, it is almost entirely due to misunderstandings and combative narratives rather than material differences.
Therefore, it is arguably necessary to explain the pathway for alignment more explicitly (at the risk of repetition). Using the data architecture in the previous diagram, accurate downstream values can be communicated to customers for calculating their own direct emissions. Alternatively, customers can use reference data (using averages and estimates) if specific data is not provided by the supplier. Both upstream and downstream values can be disclosed publicly or to statutory authorities as required. Companies already periodically file returns for sales tax (value added tax) — there is no reason this existing disclosure infrastructure could not be extended to include emissions data. The benefit would be that authorities and regulators can have near to real-time access to real-world emissions data, enabling them to continuously improve the veracity of their analysis as well as the reference values they in turn make available to companies for estimation.
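As a minimal sketch of the estimation fallback described above (the function name and reference values are hypothetical, not drawn from any standard):

```python
# Prefer supplier-specific emissions data; fall back to published
# reference averages when it is missing. All values are illustrative.
REFERENCE_INTENSITY = {   # tCO2e per tonne of product, hypothetical averages
    "steel": 1.9,
    "aluminium": 8.5,
}

def embodied_emissions(product: str, tonnes: float,
                       supplier_intensity: float | None) -> float:
    """Embodied emissions of a purchase, using supplier data when available."""
    intensity = (supplier_intensity if supplier_intensity is not None
                 else REFERENCE_INTENSITY[product])
    return tonnes * intensity

print(embodied_emissions("steel", 100, supplier_intensity=1.4))   # 140.0 (specific)
print(embodied_emissions("steel", 100, supplier_intensity=None))  # 190.0 (reference)
```

As authorities publish better reference values, the fallback itself improves over time.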
To summarize, the diagram above demonstrates how a single integrated emissions accounting standard can produce data concurrently according to all requirements — the GHG Protocol, E-Ledgers, CBAM, input-output analysis, and any other future expectation that is reasonable. It is possible to provide each stakeholder with the data they require for their respective analytical purposes — at the levels of individual products, companies, or for macroeconomic aggregation — without distorting or corrupting the core data for any other stakeholder.
Dismissing the false choice narrative
It is unclear why Kaplan and Ramanna have not done more to dismiss the public perception that they picked a fight with the GHG Protocol instead of seeking cooperation. Unfortunately, their seemingly combative style is drawing attention for the wrong reasons.
But as Ramanna wrote in his book “The Age of Outrage”, we need to “‘turn down the temperature’ in the moment, making discussion, analysis, and better decision-making possible.”
Full Disclosure: I was involved with Kaplan and Ramanna in various working group meetings on emissions accounting, and am grateful for their invitation to contribute towards their proto-standard for the E-Liability Method that was initially published in 2024.
I remain an ardent supporter of their technical work — after all, it amplifies what was said several years previously about how to improve the GHG Protocol, whether by me or others like Mike Berners-Lee (arguing for input-output type analysis in his 2009 book “How Bad Are Bananas?: The Carbon Footprint of Everything“). It is mutually validating that independent parties have reached nearly identical conclusions based on their different perspectives.
Kaplan and Ramanna’s framing of emissions as liabilities and sequestration offsets as assets is appealing, as is their idea of E-Ledgers, not least because these terms unlock familiar balance sheet logic and a business language that is easily recognized by financial accountants and executives. This last point is a critical prerequisite for scaling emissions accounting across manufacturing companies at large.
The purpose of this article is straightforward but ambitious — to provide information (with enough detail, hopefully not too much) that can help the opposing factions see beyond their inattentional blindness, to show how alignment is possible, and to encourage collegiate cooperation.
The false choice narrative is demonstrably flawed and counterproductive.
Technical implementation — not all ledgers are the same
Whilst the idea of E-Ledgers is fundamentally drawn from accounting ledgers, Kaplan and Ramanna confusingly advocate for its implementation via an entirely different type of ledger: blockchain, a form of distributed ledger technology (DLT), which is essentially a specialized kind of distributed database. According to Kaplan, “we couldn’t have done this 10 years ago, but now we have that technology to deploy.”
Except blockchain technology is not necessary, and I explain why below. I have no objection to people using blockchain for emissions accounting if they want to. My point is simply that promoting blockchain in the context of E-Ledgers muddies the water. It creates the impression that E-Ledgers cannot be implemented without blockchain.
The goal of data interoperability is, by definition, agnostic of any specific technology. My personal view is that web protocols offer more interesting potential because they are foundational, truly open, and interoperable. New generation web protocols offer all the benefits of blockchain and DLT, at least insofar as blockchain and DLT are useful for emissions accounting.
Sir Tim Berners-Lee (inventor of the World Wide Web in 1989) has in recent years developed new distributed data protocols for the web, which he calls Solid. These protocols make the web considerably more powerful. They introduce the concept of data PODS (personal/proprietary online data stores). These data PODS, interconnected via Solid protocols, allow people to work securely with distributed data at scale.
Full Disclosure: I was involved with Berners-Lee in various Solid working groups. My team contributed software code towards his open source initiative to enable data to cascade automatically between PODS. We spoke once briefly and amicably about using Solid for emissions data. He was very supportive and enthusiastic, perhaps because his brother Mike works in the field (as mentioned above).
Image source: Author (Richter, KH). (2022). Carbon Tracker 123 and instansOS (Solid protocols and data PODS). iSumio / EngagedX Ltd.
Between 2019 and 2022, I led the development of several technical prototypes, the latest being Carbon Tracker 123. To support Carbon Tracker 123, and anticipating the need for wider ecosystem interoperability, we developed a separate enabling infrastructure called instansOS, which added a search engine to make emissions data discoverable. instansOS was conceived as a non-profit initiative following open source principles to provide a neutral and common infrastructure for enabling data sharing between various commercial software solutions. Carbon Tracker 123 distinguished between management accounts (private) and disclosures (public).
Image source: Author (Richter, KH). (2022). Carbon Tracker 123 and instansOS (private management accounts and public disclosures). iSumio / EngagedX Ltd.
Carbon Tracker 123 extended the use of PODS from personal data to company data, specifically emissions data. With the enhancements we built, it leveraged the Solid protocols so that emissions data can cascade through supply chains — essentially following the iterative calculation method of input-output analysis advocated by von Kalckreuth in his various technical papers from 2022 to 2025. This data cascade achieves what is generally considered to be the most demanding aspect of the GHG Protocol — that organizations obtain data from their suppliers about their real-world emissions.
We anticipated the need for augmenting real-world data with estimated data to facilitate incremental adoption through industry. This is not unlike how Fujifilm successfully managed the coexistence of two paradigms, analogue and digital imaging technologies.
Image source: Author (Richter, KH). (2022). Carbon Tracker 123 and instansOS (Augmenting real-world data and estimated data). iSumio / EngagedX Ltd.
During this process, we won an innovation competition set by the Scottish Government in the first half of 2021 and were subsequently contracted by them to develop a software solution for use by industry. We were guided by a simple maxim, which we were told clinched the competition with the Scottish Government for us — “your company’s indirect emissions are another company’s direct emissions, just connect the data.”
The resulting Neoni® App was deployed in 2023 with manufacturing sector companies, ingesting real business data from their systems and their suppliers. We demonstrated how the data of suppliers and customers can be interconnected, enabling emissions data to cascade (this time without Solid protocols).
With this deployment, it was more important to address the real-life business concerns of the companies using our system — the humdrum barriers to market-wide adoption. We proved that trade secrets and commercially sensitive information can be protected through careful design of data interoperability requirements by never requiring the sharing of sensitive data in the first place. This avoids the need for complicated cyber security or other whizzbang technologies. It also increases the likelihood of adoption, deftly balancing the contradictory needs of transparency and privacy.
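A minimal sketch of this privacy-by-design principle, with hypothetical field names, shows the idea: the record shared downstream carries only the derived per-unit figure, never the sensitive inputs it was computed from.

```python
from dataclasses import dataclass

@dataclass
class InternalRecord:               # stays private to the manufacturer
    process_energy_kwh: float       # commercially sensitive
    supplier_mix: dict[str, float]  # commercially sensitive
    emissions_per_unit: float       # tCO2e per unit, the derived total

@dataclass
class SharedRecord:                 # the only thing a customer ever sees
    product_id: str
    emissions_per_unit: float       # tCO2e per unit

def disclose(product_id: str, internal: InternalRecord) -> SharedRecord:
    # Sensitive inputs (energy use, supplier mix) are never shared at all,
    # so they need no encryption or other whizzbang protection downstream.
    return SharedRecord(product_id, internal.emissions_per_unit)
```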
Working with messy and incomplete data
Fortunately, the requirement for emissions data to cascade through supply chains is much less onerous than most people think. To stretch Voltaire’s maxim, we cannot let an obsession with perfect data be the enemy of good decarbonization, not when reality presents us with messy and incomplete data.
Anyone who has attempted emissions accounting knows that real-world data are patchy because it is impossible to get data from all suppliers. It is therefore naïve to campaign for a rigorous ledger-based accounting system if its success depends upon everyone implementing it. Von Kalckreuth is sensitive to this. In his 2025 paper, he wrote “GHG Protocol Standards were developed around the turn of the millennium for a world in which only few and isolated companies decided to give an account of their carbon emissions — voluntarily. At the time, there was no use in pointing to the accounting work of others as a prime source of information.”
Rooted in this practical reality, von Kalckreuth demonstrates how, using input-output analysis, even such an ostensibly patchy combination of real-world data and estimates can converge towards accurate emissions values in aggregate, over time.
It works as follows. You start with real-world data about your organization’s direct emissions and then add estimates for all your indirect emissions; this produces a comprehensive (but approximate) initial output value of your cradle-to-gate emissions. You then communicate this output data to your customer. They use your output data, plus that of their other suppliers, as inputs in calculating their indirect emissions, adding specific real-world data about their own direct emissions. The resulting output is the cradle-to-gate emissions of your customer, which they in turn communicate to their customers, and so on. The proportion of real-world data increases at every iteration of the calculation, diluting the inaccuracies in estimated data over time as data cascade through the system. Not only is this consistent with the GHG Protocol, but it achieves its most demanding requirement — getting data from suppliers.
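A minimal sketch of one cascade step (the names and numbers are my own invention) may help:

```python
# Each company's cradle-to-gate figure = its own measured direct emissions
# plus the values reported (or estimated) for its purchased inputs.
def cradle_to_gate(direct_tco2e: float, input_values: list[float]) -> float:
    return direct_tco2e + sum(input_values)

# Tier 2 supplier: measured direct emissions plus estimates for its inputs.
tier2 = cradle_to_gate(direct_tco2e=500.0, input_values=[120.0, 80.0])

# Tier 1 supplier: uses tier 2's reported figure in place of an estimate.
tier1 = cradle_to_gate(direct_tco2e=300.0, input_values=[tier2, 50.0])

# Final manufacturer: another iteration -- the share of real-world data
# in the total grows with every hop through the chain.
me = cradle_to_gate(direct_tco2e=200.0, input_values=[tier1])
print(tier2, tier1, me)   # 700.0 1050.0 1250.0
```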
This is practically relevant for the manufacturers of complex products, like cars or computers. They do not need to obtain emissions data from an unknown supply chain all the way to each material’s origin. In most cases, data from a few tiers in the supply chain is enough — and then, not even from every supplier. More depth is only required from those industry sectors with strong heterogeneity (high emissions variation within the sector). These are industries like agriculture, plastics and rubber products, and outputs from textile mills. In these heterogeneous industries, one supplier may have extremely low emissions whereas their competitor may have extremely high emissions. Von Kalckreuth contrasts this with sectors that have “little heterogeneity, such as service industries with a strong focus on office work,” for which a higher degree of average data about indirect emissions can still support statistically accurate data in aggregate.
Image enhanced by author, to clarify labels on the horizontal axis and the title. Original source: von Kalckreuth, U. (2024). Harnessing the power of input-output analysis for sustainability: A simulation study based on US data (IFC Working Paper No. 24, pg. 17). Bank for International Settlements. https://www.bis.org/ifc/publ/ifcwork24.pdf
Von Kalckreuth’s chart above shows how, for industries with little heterogeneity (like lawyers and accountants), embedded emissions converge to accurate values quite quickly (these are the lines nearly horizontal at the bottom of the chart). This convergence happens due to the nature of recursive calculations within input-output analysis. Technically speaking, while the rate of convergence is the same for all industries, those industries with high heterogeneity have initial errors that can be devastatingly high (these start near the top-left of the chart).
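To see why the chart looks the way it does, here is a toy model of my own (not von Kalckreuth’s simulation): if each cascade iteration replaces a fixed share of estimates with real-world data, the error decays at the same rate for every sector, but from very different starting points.

```python
# Toy model: error halves each iteration, regardless of sector, so
# convergence speed is identical while starting errors differ hugely.
def error_path(initial_error: float, decay: float = 0.5, steps: int = 6):
    errors, e = [], initial_error
    for _ in range(steps):
        errors.append(round(e, 2))
        e *= decay
    return errors

print(error_path(5.0))     # homogeneous sector: [5.0, 2.5, 1.25, ...]
print(error_path(200.0))   # heterogeneous sector: [200.0, 100.0, 50.0, ...]
```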
For industries with little heterogeneity, which are the majority, convergence of values happens quickly because the estimated values are within a small range of the actual emissions. Practically speaking, this means that these industries only need to provide real-world data about their own direct emissions, augmenting this with estimated data for all upstream emissions beyond that. In some situations, or for industries with extremely little heterogeneity, it may even be sufficient to use estimated data for all emissions, both direct and indirect — more research is required to validate this assertion.

For industries with high heterogeneity in emissions values, it is necessary to obtain real-world data about direct emissions from deeper in the supply chain, to about tier three or four. Estimated data can be used for all other upstream emissions beyond that.
Whilst getting data from tier three or four in a supply chain is not trivial, this depth is only required in a few industry sectors. For the rest, which is most organizations, the requirement for real-world data is much less burdensome. At least we now have statistical analysis to help us establish the materiality thresholds of data requirements — in other words, the depth of data disclosure required for regulations like CBAM.
One can speculate that in practice, the materiality thresholds for deep supply chain data might be driven by weighting a combination of factors such as the following (a speculative sketch follows below):
· Emissions heterogeneity of a product type (as identified by von Kalckreuth).
· Emissions intensity of that product type (the typical amount of emissions per unit of product, see chart below for Germany).
· Whether the quantity of a supplied item makes up a significant proportion of a company’s total supply volume or not.
Source: von Kalckreuth, U. (2025). Product carbon contents – an encompassing and market based information system. Latin American Journal of Central Banking (Annex 1: Carbon content for product groups, Germany 2018). https://www.sciencedirect.com/science/article/pii/S2666143825000080 (Shared under Creative Commons license BY-NC-ND 4.0)
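Purely as a speculative sketch of such a weighting (the weights, normalization, and scoring rule are invented for illustration):

```python
# Hypothetical materiality score: each factor normalized to 0..1;
# a higher score means deeper supply chain data is warranted.
def materiality_score(heterogeneity: float, intensity: float,
                      volume_share: float,
                      weights=(0.5, 0.3, 0.2)) -> float:
    w_h, w_i, w_v = weights
    return w_h * heterogeneity + w_i * intensity + w_v * volume_share

# Steel input: heterogeneous, emissions-intensive, large share of supply.
print(materiality_score(0.9, 0.8, 0.7))   # 0.83 -> require tier-3/4 data
# Office services: homogeneous, low intensity, small share of supply.
print(materiality_score(0.1, 0.1, 0.2))   # 0.12 -> estimates suffice
```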
This work by von Kalckreuth indicates that:
· It is better to begin implementing the input-output approach sooner, even with poorer quality supply chain data, rather than delay until better data about indirect emissions are available. Accuracy is an emergent property — a function of the recursive nature of input-output analysis, which iterates over time through supply chain tiers.
· We can identify which sectors need more real-world data from deeper in their supply chains versus those that do not. Industry and CBAM regulators alike will appreciate more targeted requirements about the number of supply chain tiers that need to be included for producing statistically rigorous data. Regulatory requirements can be guided by materiality principles on a sector-by-sector basis (or, if required, at the more granular level of product types).
Wider lessons worth sharing
My team and I gleaned important insights from deploying a software solution with manufacturing companies and their supply chains. The most important lessons are often not technical but soft — helping people and organizations embrace the change, not resist it, so that they are comfortable adopting the innovation:
1) It is possible to achieve the most demanding requirements of the GHG Protocol by using a data-science approach to make implementation easier, faster, cheaper, and more accurate. This can achieve a level of auditability akin to financial accounts and the statistical rigor of input-output analysis. There is no false choice. Multiple disclosure formats and parallel analytical objectives can coexist alongside each other, all driven by one common emissions accounting framework and data system.
2) We can start with messy and incomplete data, using estimated data when real-world data are either not available or not required. It is better to begin implementing this input-output type methodology at scale, sooner rather than later, because system-wide accuracy is an emergent property of the recursive calculation when supply chain organizations apply it in cooperation with each other. In plain language — this is a learning system that is fault tolerant and improves the more it is used.
3) Through disciplined design of data sharing requirements, it is possible to protect the trade secrets and commercially sensitive information of organizations. This principle can be extended to protecting national security interests relating to the supply chains of critical industries.
To paraphrase Mia Mottley — emissions are the enemy. Our common goal is to unlock decarbonization as a competitive advantage, with all businesses competing on a level playing field globally. This requires collective action and cooperation amongst industrial supply chains but especially within the emissions accounting profession.
P.S. I know that people from the GHG Protocol, E-Ledgers Institute, Carbon Measures, and others are meeting later this month in Colorado for the Aspen Forum on Carbon Accounting. May the force of collegiate collaboration be with you. I wish you every success!
—
This is a guest contribution by Karl H Richter. He is the Founder and Executive Director of EngagedX, a company dedicated to advancing sustainable finance and impact investing. He leads iSumio, which developed the Neoni App for emissions accounting, cited as an exemplar by central banks. He lectures on ESG and impact finance as well as emissions accounting at the Frankfurt School of Finance and Management. Previously, Karl worked with the UNDP, the European Commission, and the OECD.