The following talk, on sharing public-private data for food safety, was recorded at the Dubai International Food Safety Conference.
Automated transcript:
Using Data Trusts to share public-private data for food safety.
Why Share Data?
Food safety data should always remain non-competitive. There’s a lot of fear around sharing it, yet we’ve heard from other speakers today in this wonderful session about all the technologies that enable it. So why share it?
Under the climate change theme, ultimately the global food supply chain is vulnerable, and climate is only one of a host of triggers and drivers of food safety issues. The supply chain is built on shifting sands and is under pressure from various drivers, socioeconomic and beyond. Here is one example of weather and climate change having a real, tangible impact.
Take July 2023, when the land temperature in Spain reached 61 degrees Celsius. Now, maybe in Dubai that isn’t out of the ordinary, but for Spain it is. And what was the impact of that? One example was the decimation of the olive oil harvest.
So sometimes you need to experience a tangible impact on your supply chain to effect change. What I want to focus on is a technology to facilitate that interoperability, as previously mentioned, so we can share data to overcome a lot of these supply chain vulnerabilities.
The olive oil harvest is one example, and I wish to highlight a few points from the recent Global Food Security Index – the four key findings from 2022:
- Food security trending downwards
- Affordability plummeting
- Food security gap widening
- Innovation is essential to build resilience
This ultimately acknowledges that the food supply chain, and food security with it, is vulnerable and changing, and that’s the new norm.
Global Food Security Index
I’ve talked to so many people across the food supply chain over the last couple of years who are experiencing intense pressure because of these shifting sands. Some of them hope that if they can just get over this short-term hump, then it’ll be fine and everything will go back to the way it was. That isn’t going to happen. This is the new norm. And so we need to build resilience around it.
The Index also publishes a ranking, and that comes with a caveat. On another day I’d be quite happy to say I’m from Ireland, we’re second, great job, well done, pat ourselves on the back and move on. But the food supply chain is global, so the weakest performers are our problem too. And this again demonstrates why we need to share data: to bring together the collective wisdom and elevate the whole supply chain.
And as I said, make it non-competitive.
What is a Data Trust?
What is a data trust? It is a framework to de-risk the sharing of what is deemed confidential, private data, and to bring it together for the collective good. We’ve even heard from regulators and government authorities this week who, quite enviously, have said they know industry has really good data: ‘I’d love to see it!’ Sometimes it’s perceived that ‘never the twain shall meet’, but I will give you case studies where this can and does work. People experience daily, tangible benefits; they get actionable intelligence. So it does work. These are the green shoots, and maybe they’re more than green shoots, but I don’t know what the next growing term is!
So is it an agreement to share data, or a platform? Ultimately it’s both: it combines the underlying software technology with a legal framework and a series of terms and conditions around data governance and so forth.
And who shares data?
Ultimately, as I said before, food safety data is non-competitive, and we need to share it, because continuing the way we currently are isn’t an option anymore.
I’ll talk a little bit more about the underlying technology, but it’s both modular and scalable, so there’s flexibility for it to breathe.
Who can see the data?
Data sharing in industry
We’ve listened to the stakeholders, so you can have an industry-only scenario, or sharing within a single organization, which may be across divisions or franchises. And please don’t ever believe that the larger the organization, the better they share or host data.
I’m never surprised when shared data still arrives as a handwritten note. It’s 2023, and that still exists at scale. Thankfully that’s not a problem, and we can deal with it.
Data sharing with regulators
We can also look at this from the regulator’s viewpoint, both within an agency and across government agencies, and I’ll talk briefly about a case study with the FDA at the end. The ideal scenario is that these groups ultimately come together in a non-competitive way.
About Creme Global
So I’ll pause here for a second to introduce the company. Creme Global is headquartered in Dublin, Ireland, and is, at its core, a scientific modeling and data science company. We have data science, maths and stats teams, data engineering, front-end and back-end software, and food and nutrition teams, and all of this core expertise sits under one roof to address the concerns of our customers and build something that works: it’s robust, it’s scalable, and it addresses a need.
So we can have a series of organizations or users submitting data. And that’s as quick and easy as a ‘drag and drop’ style approach, in various formats, into a safe, secure repository.
We can anonymize it at the source of submission.
We can shuffle the deck and aggregate that data. It may be grouped by domain or by commodity, so that contributors can benchmark their food safety activities against others, against the collective knowledge.
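To make the anonymize-then-aggregate idea concrete, here is a minimal sketch, not the platform’s actual implementation: it assumes each submitter replaces its own identifier with a salted hash before upload, and the aggregation step then benchmarks each anonymized contributor against the pooled results for a commodity. The column names, the salt handling and the `benchmark` helper are all illustrative assumptions.

```python
import hashlib
import pandas as pd

def anonymize_at_source(df: pd.DataFrame, org_name: str, salt: str) -> pd.DataFrame:
    """Replace the organization identifier with a salted hash before upload."""
    org_token = hashlib.sha256(f"{salt}:{org_name}".encode()).hexdigest()[:12]
    out = df.copy()
    out["org_token"] = org_token  # pseudonymous ID, stable per submitter
    return out.drop(columns=["org_name"], errors="ignore")

def benchmark(pooled: pd.DataFrame) -> pd.DataFrame:
    """Compare each contributor's failure rate to the collective rate per commodity."""
    rates = (pooled.groupby(["commodity", "org_token"])["test_failed"]
                   .mean()
                   .rename("org_failure_rate")
                   .reset_index())
    collective = (pooled.groupby("commodity")["test_failed"]
                        .mean()
                        .rename("collective_failure_rate"))
    return rates.merge(collective, on="commodity")

# Illustrative submission from one member organization
submission = pd.DataFrame({
    "org_name": ["Acme Foods"] * 3,
    "commodity": ["olive oil", "olive oil", "honey"],
    "test_failed": [0, 1, 0],
})
pooled = anonymize_at_source(submission, "Acme Foods", salt="per-trust-secret")
print(benchmark(pooled))
```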
We can also supplement that with various other datasets – we’ve seen some wonderful talks already about that, and Fadi mentioned the huge amount of alert data, RASFF and so forth. We can pull all of that in, pool it, and then review it in your communication hub, which is your advanced visualization tool.
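As a rough illustration of supplementing member submissions with public alert data, the sketch below joins pooled test results with a hypothetical extract of RASFF-style notifications by commodity. The alert feed layout and column names are assumptions, not the real RASFF schema.

```python
import pandas as pd

# Pooled, anonymized member results (illustrative)
pooled_results = pd.DataFrame({
    "commodity": ["olive oil", "honey", "prawns"],
    "org_failure_rate": [0.12, 0.02, 0.07],
})

# Hypothetical extract of public alert notifications (RASFF-style)
alerts = pd.DataFrame({
    "commodity": ["olive oil", "prawns"],
    "alerts_last_90d": [14, 6],
})

# Pool the two sources so the dashboard can show internal results
# alongside external alert pressure for the same commodity.
combined = (pooled_results
            .merge(alerts, on="commodity", how="left")
            .fillna({"alerts_last_90d": 0}))
print(combined)
```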
Ultimately, it’s built around that legal privilege, which is inherently built into the system. And from there we can scale this further.
We can have various users within one organization, and we can simply add more users. There are various data engineering techniques, like sectional access, that protect your data so it isn’t contaminated or seen by others until it is aggregated and anonymized. But now you’re starting to build a major dataset, and if you left it there, you may end up with noise. Very few people could extract all the key insights from a one-million-row spreadsheet reviewed manually, and if I’m going to make a business decision on the feedback you give me from that, it’s a high-risk strategy – spoiler alert, you won’t be able to do it! So we build in technology that does it for you. We’ll highlight the obvious findings, we’ll show you the non-obvious ones, and you can even end up with a dynamic risk score, a dynamic risk ranking, or a vulnerability score, which matters especially in this constantly changing, dynamic supply chain.
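To show what a dynamic vulnerability score might look like in its very simplest form, here is a hedged sketch that builds on the combined table above: a weighted blend of internal failure rates and rescaled external alert volume, recomputed whenever new data lands. The weights, column names and scaling are placeholders; an actual production score would be considerably more sophisticated.

```python
import pandas as pd

WEIGHTS = {"org_failure_rate": 0.6, "alert_pressure": 0.4}  # illustrative weights

def vulnerability_scores(combined: pd.DataFrame) -> pd.DataFrame:
    """Rank commodities by a simple weighted vulnerability score."""
    df = combined.copy()
    # Rescale alert counts to 0-1 so the two signals are comparable.
    max_alerts = df["alerts_last_90d"].max() or 1
    df["alert_pressure"] = df["alerts_last_90d"] / max_alerts
    df["vulnerability"] = (WEIGHTS["org_failure_rate"] * df["org_failure_rate"]
                           + WEIGHTS["alert_pressure"] * df["alert_pressure"])
    return df.sort_values("vulnerability", ascending=False)

combined = pd.DataFrame({
    "commodity": ["olive oil", "honey", "prawns"],
    "org_failure_rate": [0.12, 0.02, 0.07],
    "alerts_last_90d": [14, 0, 6],
})
print(vulnerability_scores(combined)[["commodity", "vulnerability"]])
```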
And then we can pull in the power of machine learning to move from insight to foresight, and maybe, for the first time, move away from ‘putting out fires’ to staying ahead and predicting where those fires are most likely to occur.
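As a gesture at the ‘insight to foresight’ step, the following sketch trains a basic classifier to predict the likelihood of a failed test from a few aggregated features. It is purely illustrative: the feature names are invented, the data is synthetic, and it implies nothing about the models actually used in any of the platforms discussed here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic history: [failure_rate_last_quarter, alerts_last_90d, supplier_changes]
X = rng.random((500, 3)) * [0.3, 20, 5]
# Synthetic label: did a failure occur the following quarter? (toy relationship plus noise)
y = ((0.5 * X[:, 0] / 0.3
      + 0.3 * X[:, 1] / 20
      + 0.2 * X[:, 2] / 5
      + rng.normal(0, 0.1, 500)) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Predicted probability of a failure next quarter for a new commodity profile
print("failure risk:", model.predict_proba([[0.15, 12, 3]])[0][1])
print("held-out accuracy:", model.score(X_test, y_test))
```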
User Provisioning
So, a little bit about user provisioning. The roles are largely self-explanatory, but ultimately we have control at every stage, from submitter to approver, and these can all sit within your organization. This is an additional safeguard inherently built into the Data Trust. It may be that you don’t want your submitter to also view a dashboard, and we ultimately control access as needed.
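The submitter/approver/viewer separation could be expressed as a simple role-to-permission mapping. The sketch below is a hypothetical illustration of that idea, assuming three roles and three permissions; it is not the platform’s actual provisioning model.

```python
from enum import Enum, auto

class Permission(Enum):
    SUBMIT_DATA = auto()
    APPROVE_SUBMISSION = auto()
    VIEW_DASHBOARD = auto()

# Illustrative role definitions: a submitter need not see the dashboard.
ROLE_PERMISSIONS = {
    "submitter": {Permission.SUBMIT_DATA},
    "approver": {Permission.APPROVE_SUBMISSION, Permission.VIEW_DASHBOARD},
    "viewer": {Permission.VIEW_DASHBOARD},
}

def can(role: str, permission: Permission) -> bool:
    """Check whether a provisioned role carries a given permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert can("submitter", Permission.SUBMIT_DATA)
assert not can("submitter", Permission.VIEW_DASHBOARD)  # access controlled as needed
```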
System Architecture
I won’t go heavily into the system architecture, but again, it’s designed around customer needs and removes barriers to participation. We can ingest data in pretty much any form (each format comes with its pros and cons), and especially when you move to using APIs, every other step on this screen is fully automated.
So there is nothing you need to do. All you might get is an email alert: maybe the vulnerability score has changed, or you’re prompted to go to the dashboard to review.
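To illustrate the ‘everything downstream is automated’ point, here is a minimal sketch of an alert check that could run after each ingestion: if a commodity’s vulnerability score moves by more than some threshold, an email notification goes out. The threshold, message wording and the `send_email` placeholder are all assumptions for illustration.

```python
ALERT_THRESHOLD = 0.1  # illustrative change that warrants an email

def send_email(recipient: str, subject: str, body: str) -> None:
    # Placeholder for whatever mail service a real deployment uses.
    print(f"to={recipient!r} subject={subject!r}\n{body}")

def check_for_alerts(previous: dict, current: dict, recipient: str) -> None:
    """Compare old and new vulnerability scores and notify on significant change."""
    for commodity, new_score in current.items():
        old_score = previous.get(commodity, new_score)
        if abs(new_score - old_score) >= ALERT_THRESHOLD:
            send_email(
                recipient,
                subject=f"Vulnerability score changed: {commodity}",
                body=f"{commodity}: {old_score:.2f} -> {new_score:.2f}. "
                     "Please review the dashboard.",
            )

check_for_alerts({"olive oil": 0.42}, {"olive oil": 0.61, "honey": 0.05}, "qa@example.org")
```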
Three case studies
So I’m going to go to three case studies in the last few minutes. And so the first one is the Food Industry Intelligence Network.
Case study: fiin (Food Industry Intelligence Network)
And this is, again, a data collection platform, a data sharing platform, a data trust. There’s legal privilege involved, and it’s all based around food authenticity.
It’s been active since 2018. It arose from the horse meat scandal: the need for industry to share data was one of the main recommendations that came out of the Elliott Report, commissioned by the British government.
The membership is growing all the time, which is wonderful. Nobody now comes to me saying, ‘What happens with my data? I’m fearful about my data.’ The only conversations I get are about how and when to upload. We have legal privilege within the framework, and it works.
fiin members have a collective data-sharing system, and each gains the power of the collective knowledge. Individually you might do one or two lab tests a month, but there is no way you can afford, or have the time, to raise that to 5,000 lab tests in a given food commodity space. Only at that scale do you start to see trends and gain actionable insights far beyond your own testing dataset – without that, it’s virtually impossible.
This image is a simplified view of the flow from incoming data, through the legal privilege framework, to aggregated insights. And of course, the next step is to wrap that with machine learning technology and various scientific algorithms to move from insight to foresight.
And these are some of the key benefits. I think I’ve already mentioned them, but ultimately it’s all about prediction and prevention.
Case study: Western Growers
The Western Growers Association is nearly 100 years old, founded in 1926, and operates around the Salinas Valley in California, often referred to as the salad bowl of the world, with 1.4 million acres of arable land feeding America. I thought we did good farming in Ireland, but this is on a whole different scale! I believe this is the first time in those nearly 100 years that the Growers have formally shared data with their traditional competitors. It’s okay to be competitive, but they’re sharing food safety data non-competitively and gaining benefits and actionable insights.
As I said you can drag and drop files, and then through data engineering and ETL processes, which I won’t go into, the platform automatically updates your dashboard.
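As a rough sketch of the ETL idea behind the drag-and-drop flow, the snippet below normalizes heterogeneous uploads (different column names and units) into one common schema before any dashboard aggregation runs. The column mappings and schema are invented purely for illustration.

```python
import pandas as pd

# Hypothetical mapping from member-specific column names to a shared schema
COLUMN_MAP = {
    "Commodity": "commodity",
    "Product": "commodity",
    "Result (CFU/g)": "result_cfu_g",
    "cfu_per_g": "result_cfu_g",
    "Sampled": "sample_date",
    "Sample Date": "sample_date",
}

def normalize_upload(raw: pd.DataFrame) -> pd.DataFrame:
    """Map an uploaded file onto the shared schema and coerce types."""
    df = raw.rename(columns=COLUMN_MAP)
    df["sample_date"] = pd.to_datetime(df["sample_date"])
    df["result_cfu_g"] = pd.to_numeric(df["result_cfu_g"], errors="coerce")
    return df[["commodity", "sample_date", "result_cfu_g"]]

upload = pd.DataFrame({
    "Product": ["leafy greens"],
    "Result (CFU/g)": ["120"],
    "Sample Date": ["2023-07-01"],
})
print(normalize_upload(upload))
```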
And they have various dashboards like this, where they can get the granularity that they need. And very quickly, we can adapt dashboards as needed.
One of the key benefits is reducing costs, and it’s okay to say that there is a competitive advantage in that. There needs to be change, but what we’re offering is a non-disruptive change. It complements existing processes, and people can and do buy into it.
Case study: FDA Seafood Data Sharing
And so the last example I’ll give is a very productive project that’s been ongoing for a while now with the FDA, and one that’s quite close to my heart. It’s all around seafood, where 94 percent of all seafood consumed in the US is imported.
There are some phenomenal checks and measures currently in place, but again, as a regulator, as an agency, it also has to manage capacity and resources: what do I check, where and when? Ultimately what we’re looking to do is offer them collective insights beyond what they currently have and beyond what their current budget can afford.
fiin is seen as the gold standard in data sharing globally. In a recent discussion with Chris Elliott, he told me that he’s asked, time and time again, ‘How can I roll this out?’ What I would say is that contacting me is the easiest first step.
Why? Because it can be done, and the people behind each of these case studies are gaining actionable intelligence and making decisions through data. Because things have to change.