Creating ‘Sustainability-Intelligent’ companies (post 3): data for reporting
In our last blog post we listed six best practices for leveraging data for Sustainability reporting, and went deep into best practices (BP) 1, 2 and 3. Here we develop the other three.
Best Practice 4: Avoid Manual Data Entry As Much as Possible
The issue
A lot of the software out there, and even excellent industry initiatives, are in practice fed via manual data entry, for example through templated spreadsheets or emails. This inherently raises quality questions, given the potential for human error and the lack of auditability. It's always a journey, so there is no shame in starting there.
Best practice
But you definitely want a roadmap for moving your key data points, and the key ones from your suppliers, first to automated scripts and ultimately to well-governed, fully automated data feeds with quality checks embedded in them.
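To make "quality checks embedded in the feed" concrete, here is a minimal sketch in Python; the column names, thresholds and pandas-based approach are our own assumptions for illustration, not a prescribed implementation. It loads a supplier's monthly energy file, validates it, and flags suspect rows for human review rather than silently ingesting them.

```python
import pandas as pd

# Columns we assume the agreed feed should contain (illustrative only).
REQUIRED_COLUMNS = {"site_id", "month", "kwh", "emission_factor_source"}

def load_and_check(path: str) -> pd.DataFrame:
    """Ingest one delivery of the feed and run embedded quality checks."""
    df = pd.read_csv(path)

    # Schema check: reject deliveries missing agreed columns.
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"Feed is missing columns: {missing}")

    # Completeness check: no null consumption figures.
    if df["kwh"].isna().any():
        raise ValueError("Null kWh values found; send back to the supplier")

    # Plausibility check: flag (rather than drop) outliers for a human sanity check.
    df["needs_review"] = (df["kwh"] <= 0) | (df["kwh"] > 1_000_000)

    # Auditability: record where each figure came from.
    df["ingested_from"] = path
    return df
```

The specific checks matter less than the fact that they run automatically on every delivery and leave an audit trail, which is exactly what manual entry struggles to provide.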
The Benefits
This would also help solve the other major pain points in sustainability reporting: the high level of manual effort and the resources needed. Automation also becomes more and more crucial as you aim for higher data resolution and frequency.
To be clear, though: I've come across some healthy skepticism stemming from flawed automation. Automation can only succeed if it is paired with data literacy (see BP2), with human sanity checks, and with built-in features that let users trace back specific assumptions. Otherwise you run the risk of a black-box system, sometimes contradicting expert-led estimates, with no one at hand to track down mistakes. So, as with everything in data, there is no silver bullet. But done well, automation is still the right direction.
‘Brain Puzzle’ 5: the supplier data headache. Here I will keep the same initials (BP) but break from the best-practice framework, because my take is that this space is still quite immature. So we'll talk about a brain puzzle, not a best practice, and try to outline an approach.
The need for high quality data at scale across a supply chain
At a high level, what matters here is that the source data be assured, automated and of high enough quality, and then that the sharing of that data meets all parties' requirements.
On data quality, there is no shortcut. Companies need to bring their supply chain along not just on the decarbonization journey, but also on the data quality journey. If everyone starts to follow the principles above, the whole ecosystem matures, and data quality (and automation) improves across the supply chain. Initiatives and alliances that pool data across sectors should start including measures and incentives for quality in their models.
Common pain points in sharing data
When it comes to sharing, there are two common pain points: the many heterogeneous data requests that weigh on suppliers, and the sensitive nature of the data being shared. The first problem can be solved via more granular standards (like PACT) and/or via the right structuring of data so that all required data points can be extracted, potentially complemented by an AI interface to fill in questionnaires. Humans could then review and edit instead of having to do it all. The sensitivity issue is harder, but it has analogues and solutions used in other settings: most prominently, the data clean rooms used in marketing, which enable data processing across databases without having to physically ‘pool’ the data together, and the trust frameworks developed by the Open Data Institute and Icebreaker One. We can't go into detail here, but will definitely dive deeper in subsequent posts.
Our recommendations
So in terms of recommendations in this space I would say:
Consider incorporating data Service Level Agreements (SLAs) into supplier contracts, essentially mandating increasing levels of data access over time. Dimensions to consider include granularity, frequency and automation of the data feed. Even with a templated spreadsheet sent by email, you can automate the processing (see the sketch after this list). This practice is common in other areas (e.g. with marketing agencies); it ensures access to high-quality data and also promotes accountability among suppliers. It may require a longer contract to justify the supplier's investment.
Consider leveraging industry-level initiatives, pushing your suppliers to join them, or leading one
Ecosystem initiatives like Perseus, or open data projects like the North Sea Transition one, can bring huge value. Perseus has worked with electricity providers to make electricity footprint data available to every SME at a very granular level.
Initiatives like Manufacture 2030 are pooling data from common suppliers so it can then be used by their clients.
PACT is defining more detailed data standards to help align methodology.
Consider potential tech solutions for direct integrations with strategic suppliers
Concepts like data clean rooms (mentioned above) can offer innovative solutions. This AWS article gives an example of how this can be used to aggregate data.
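As an illustration of the point above about automating the processing of even a templated spreadsheet, here is a minimal Python sketch; the tab name, column names and SLA thresholds are hypothetical and would come from your own supplier contract. It reads the agreed template, checks the delivery against the contracted granularity, and returns a summary the account owner can act on.

```python
import pandas as pd

# Hypothetical data-SLA terms agreed in the supplier contract.
SLA = {
    "required_columns": ["date", "site", "scope", "tCO2e"],
    "min_rows_per_month": 28,  # roughly daily granularity
}

def check_supplier_template(xlsx_path: str, month: str) -> dict:
    """Score one emailed template delivery against the data SLA."""
    # Tab name is an assumption; use whatever the contract specifies.
    df = pd.read_excel(xlsx_path, sheet_name="emissions")

    missing = [c for c in SLA["required_columns"] if c not in df.columns]

    # Keep only the rows for the month being checked, e.g. month="2025-05".
    if "date" in df.columns:
        monthly = df[df["date"].astype(str).str.startswith(month)]
    else:
        monthly = df.iloc[0:0]

    return {
        "missing_columns": missing,
        "rows_received": len(monthly),
        "meets_granularity": not missing and len(monthly) >= SLA["min_rows_per_month"],
    }
```

A report like this can be generated automatically as soon as the email attachment lands, turning an SLA clause into something you can actually monitor.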
BP6: Aim high but get there step by step, and bring your senior team and organization along with you. Data is always a (somewhat arduous) journey, with inflection points and acceleration once the people you need to involve start to ‘get it’. So cultural work is critical to get buy-in and keep your key stakeholders on board. Work on that as much as you work on technical roadmaps and capabilities. And then try to chew through one big problem at a time: for example, get one large supplier right first while other conversations progress at a slower pace. Then you have a template for all to follow.
And stay positive along the way: the ecosystem will mature and smooth out many of today's pain points.
Until next time
With that, we wrap up our first foray into ‘data for Sustainability reporting’. Stay tuned as we delve deeper into using data to drive faster action in the upcoming posts. If these topics resonate with you or if you have questions, we welcome your comments and engagement. Together, we can accelerate progress toward a more sustainable future.