
Truth in ESG Labels
Posted by Chuck Hollis on 26 April 2022 02:50 PM

How many of us pick up something at the grocery store and read the back of the label? We all want to know what we’re buying: the ingredients, their provenance and so on.

The same is true when buying financial products and services: we want to know what’s inside.

This is especially true when they are sold as ESG, or sustainable from an environmental, social, and governance perspective.

Much as you would vote for sustainable food choices by purchasing them in the grocery store, you can also vote for sustainable investment choices.

But there’s a problem. The definition of ESG – like organic or sustainable food – will always be a moving target. What do you put on the label? How do you prove what you claim?

One answer is to use external agencies to certify the ESG aspects of your product or service. Indeed, there is no shortage of agencies that will sell you ratings of companies or products based on their proprietary ESG criteria.

Buyers will want full transparency and disclosure at all times, viewed through two lenses: as a financial investment, and as a social good. The more transparent and understandable you make your investment decisions on behalf of your clients, the more trust you will earn.

This need for an externally verifiable chain of trust is not unique to ESG financial products. It shows up in many places, but the complexity and subjectivity of consumer expectations make it particularly difficult here.

As an example, what exactly defines “farm to table” at a restaurant?

Let’s Make It More Interesting

Managing a portfolio of potentially complex investments on behalf of a client has never been a simple task to begin with. Doing so through an ESG lens raises the complexity to an almost mind-boggling level.

Each of the many ESG rating agencies has its own KPIs – key performance indicators – covering a wide variety of ESG measurement points.

There is no single authority, which means the choice of KPIs is subjective, as is how they are defined, measured, and communicated.

This collection of subjective KPIs is periodically updated and sold to the ESG investment firm, which must figure out what to do with it.

While there might be good tools and methodologies to evaluate a portfolio from a financial perspective (risk, tax consequences, etc.), there are no such tools to do so from an ESG perspective.

And there are certainly no tools for doing both at the same time.

A given ESG financial analyst might be responsible for interpreting and evaluating over 10,000 complex data elements, provided on an ongoing basis, and making decisions of consequence on behalf of the organization.

If that’s the job, and there are no widely used tools for doing the job, what do people do today?

What People Do Today

The financial firms who market ESG investments use the smartest people they can find – quantitative analysts, or quants – to figure out what to do.

Good quants start assembling tools to analyze the data they have been given, usually out of various software components: databases, spreadsheets, scripting tools, and the like.

Smart people can quickly build impressively powerful tools by assembling easily available software components, typically open-source.

Compared to what could be done not too long ago, it can look like magic, and in some regards it is.

Unfortunately, this “researchers building their own powerful tools” pattern can be as counter-productive in ESG financial services as it is in other research-oriented activities.

These tools are used to make decisions of consequence. Unless there is some central platform and methodology for standardizing and governing how data is interpreted – and how decisions are made by all – the effectiveness of the overall function will be severely hampered.

These quant groups hit a wall – a wall they can’t get past unless they start to organize for effectiveness. That demands a shared platform for storing facts and what is known about them.

What does this ESG KPI from this agency mean, and what should I do about it? How have our people evaluated it in the past, and why? These are important questions, and should have ready answers.

This problem is not unique to teams of quants in financial firms trying to create ESG products. It arises anywhere there is an overwhelming amount of complex information that must be interpreted, assessed and acted on.

How do you know when your team has hit the wall?

#1 Your Team Spends More Time Building Tools Than Using Them

Smart quants know how to build and use tools. If the tools are inadequate, they will spend their extremely valuable time building, maintaining, and improving their personal tools instead of using shared tools productively.

Worse, these homegrown tools are used to make decisions of consequence, and you can’t see how they work inside: exactly how they are evaluating and interpreting facts.

As a result, a new form of risk is created that’s very hard to quantify and manage – as you essentially don’t know what you don’t know.

#2 Everyone Is Proud of Their Model

This can be a good thing, if innovation is built around a shared interpretation of facts and what they mean. Otherwise, it simply fragments organizational knowledge and creates new forms of risk.

There must be a process for gathering, synthesizing, inspecting, and governing shared knowledge, as well as a platform that supports it.

Otherwise, chaos will increase, sometimes exponentially.

#3 You Have a Hard Time Explaining What You Did

Investment decisions are frequently inspected in hindsight, and you may be asked to explain why you did what you did: what facts were available at the time, how your team evaluated those facts, how specific decisions were made, and so on.

This is also very useful from a continual improvement perspective. Being able to rewind the facts and what was known about them at a particular point in time often provides clear and valuable opportunities for doing things better.

Whether those questions get asked from a compliance perspective, a client perspective, or a process improvement perspective – having explainable and provable answers at the ready is very useful.
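As a minimal sketch of that "rewind" capability: if every assertion is stamped with the time it was recorded, you can replay exactly what was known as of any past date. All dates, KPI names, and scores below are hypothetical, and a production system would of course use a database with point-in-time query support rather than a Python list.

```python
# Stamp every assertion with when it was recorded, then replay only
# what was known as of a given date. All values are hypothetical.
from datetime import date

assertions = [
    (date(2022, 1, 10), ("CompanyB", "kpiC_score", 3.6)),
    (date(2022, 1, 10), ("kpiC", "definition", "Scope 1 emissions intensity")),
    (date(2022, 4, 12), ("CompanyB", "kpiC_score", 3.8)),
    (date(2022, 4, 12), ("kpiC", "definition", "Scope 1+2 emissions intensity")),
]

def as_of(when):
    """Latest value for each (subject, predicate) known on `when`."""
    known = {}
    for recorded, (s, p, o) in sorted(assertions):
        if recorded <= when:
            known[(s, p)] = o
    return known

# What did we believe in February, when a decision was made?
feb = as_of(date(2022, 2, 1))
print(feb[("CompanyB", "kpiC_score")])  # → 3.6, not the later 3.8
print(feb[("kpiC", "definition")])      # the definition in force back then
```

Given that replay, "why did you do what you did?" becomes an answerable query rather than an archaeology project.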

#4 It Takes a Long Time for the New Person to Be Productive

Competition for quant talent is fierce, and it’s assumed that there will be rotations. If it’s taking months instead of days or weeks for these people to be productive, it’s because there is no ready store of facts along with their interpretations and meanings to learn from.

If you can’t point your very smart and very expensive new person at a shared repository of facts, what is known about them, and what they mean, they will have to improvise: reading everything they can find, talking to lots of people, making a lot of mistakes, and so on – until the domain is mastered.

To be sure, your smart people will eventually figure it out – it will just take them much longer than otherwise necessary. Also, they will need to get busy building their own tools as others have done before them.

Your organization may think of onboarding in terms of network connectivity, benefits, and the like – but how does your new team member learn the subject matter: the relevant facts and what they mean?

A Better Answer

The challenges faced by teams of smart people trying to evaluate complex data and decide what it might mean aren’t unique. These challenges arise anywhere decisions of consequence are being made: intelligence and military, life sciences, complex manufacturing, logistics, any aspect of financial services, and much more.

These teams all do the same things. They connect new facts to existing facts and knowledge, and evaluate the importance of the new fact: is it critical or not?

They create new interpretations of existing facts – new knowledge – that they want to share with others. And as people make informed decisions, they want to consume the available facts along with everything that is known about the facts and their meaning.

Databases store facts – data. Semantic databases store facts and what they mean.

Semantic databases use semantic AI to learn about facts and what they mean much as you would instruct someone new to the field: here is this thing, here is what it means, here is how it relates to other things, and so on.

In the ESG situation, the “fact” might be “agency A rated company B using KPI C a score of 3.8, +0.2 from the last rating three months ago.” If that’s the fact – what does it mean?

There would be knowledge about the agency itself: how it fits in the spectrum of all similar agencies, any biases, and so on. You’d want a precise definition of their KPI, and how it might compare to similar definitions. You would want to know how important that KPI might be to your clients, or perhaps a portion of them. The “3.8” would need some qualification – scale, criteria used, and so on.

Not to make our example needlessly complicated, but it’s easy to see how any quant or researcher would want to know more about how things were defined, why they were defined that way, what might be the inherent biases present, and so on.

This is “knowledge” in the abstract. Simply put, there is a lot of knowledge required to interpret that simple fact above.

From a technical perspective, a semantic knowledge graph (or SKG) is a great way to represent facts, what you know about them, and what they mean.

Storing data (the feeds from agencies) along with what you know about the facts (your interpretations stored via an SKG) results in a shared, actionable and trustable source of data, interpretations, and decisions made.

That’s the idea behind a semantic database – one place to keep facts and what you know about them, instead of just the facts by themselves.
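One way to picture what an SKG stores is as subject-predicate-object triples. Below is a minimal, dependency-free Python sketch of the example fact above together with the knowledge around it; the identifiers (AgencyA, CompanyB, kpiC) and values are hypothetical, and a real system would use an RDF triple store queried with SPARQL rather than Python lists.

```python
# The raw fact and the knowledge about it, stored uniformly as
# subject-predicate-object triples. All identifiers are hypothetical.
triples = [
    # The raw fact: agency A scored company B on KPI C.
    ("rating001", "ratedBy",         "AgencyA"),
    ("rating001", "rates",           "CompanyB"),
    ("rating001", "usesKPI",         "kpiC"),
    ("rating001", "score",           3.8),
    ("rating001", "changeFromPrior", 0.2),
    # What we know about the fact -- the layer a plain database omits.
    ("kpiC",    "definition", "Scope 1+2 emissions intensity"),
    ("kpiC",    "scaleMax",   5.0),
    ("AgencyA", "knownBias",  "weights environmental factors heavily"),
]

def objects(subject, predicate):
    """All objects matching the pattern (subject, predicate, ?)."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# An analyst's question: what did AgencyA say about CompanyB,
# and how should the number actually be read?
for rating in [s for s, p, o in triples if p == "rates" and o == "CompanyB"]:
    kpi = objects(rating, "usesKPI")[0]
    agency = objects(rating, "ratedBy")[0]
    print("score:", objects(rating, "score")[0],
          "out of", objects(kpi, "scaleMax")[0])
    print("KPI means:", objects(kpi, "definition")[0])
    print("caveat:", agency, objects(agency, "knownBias")[0])
```

The point of the sketch is that the fact and its interpretation live in one queryable place, so every analyst – including the new hire – reads the "3.8" with the same qualifications attached.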

A semantic database creates data agility, which is the ability to make simple and powerful changes to how data is interpreted by everyone. Data agility accelerates organizational learning, among other things.

What This Means for the ESG Financial Industry

Not surprisingly, being able to easily capture, improve, and share facts and what you know about them turns out to be a significant win-win for everyone.

Clients can make informed choices, using your proprietary interpretation of facts and what they mean – easily explainable and verifiable to anyone, anytime. As consumers evolve their ESG preferences, the financial services firm can readjust its portfolio of client ESG offerings, adapting and specializing faster than its peers.

Financial services firms specializing in ESG can now more quickly build repeatable, improvable, and governable processes around every aspect of their business: finding and onboarding new clients, making talent more productive (and doing it more quickly), achieving a simpler and more complete analysis of risk and profitability, and much more.

All built on top of facts and what is known about them.

Not to forget: larger organizations that have justifiable concerns around compliance, auditability, security, and related topics are often greatly relieved by the use of an enterprise platform vs. a collection of ad-hoc tools and methodologies.

Hard Questions

How do my people evaluate facts and what they mean? Can we capture, share, standardize and repurpose their knowledge about the data so as to continually improve our effectiveness?

If only our smart people spent more time using tools instead of building them, if only we had one version of documentable truth, if we could only be more agile …

If any of these things sound like you, we’d love to talk!
