
Demystifying AI’s Black Box: Ariana Spring and Andrew Stanco on How Blockchain Tech Can Shine a Light on Hidden Inputs

Part of the magic of generative AI is that most people have no idea how it works. At a certain level, it’s even fair to say that no one is entirely sure how it works, as the inner workings of ChatGPT can leave the brightest scientists stumped. It’s a black box. We’re not entirely sure how it’s trained, which data produces which outcomes, or what IP is being trampled in the process. This is both part of the magic and part of what’s terrifying.

Ariana Spring is a speaker at this year’s Consensus festival, in Austin, Texas, May 29-31.

What if there were a way to peer inside the black box, allowing a clear visualization of how AI is governed, trained and produced? This is the goal — or one of the goals — of EQTY Lab, which conducts research and creates tools to make AI models more transparent and collaborative. EQTY Lab’s Lineage Explorer, for example, gives a real-time view of how a model is built.

All of these tools are meant as a check against opacity and centralization. “If you don’t understand why an AI is making the decisions it’s making or who’s responsible, it’s really hard to interrogate why harmful things are being spewed,” says Ariana Spring, Head of Research at EQTY Lab. “So I think centralization — and keeping those secrets in black boxes — is really dangerous.”

Joined by her colleague Andrew Stanco (head of finance), Spring shares how crypto can create more transparent AI, how these tools are already being deployed in service of climate change science, and why these open-sourced models can be more inclusive and representative of humanity at large.

This interview has been condensed and lightly edited for clarity.

What’s the vision and goal of EQTY Lab?

Ariana Spring: We’re pioneering new solutions to build trust and innovation in AI. And generative AI is kind of the hot topic right now, and that’s the most emergent property, so that’s something that we’re focused on.

But we also look at all different kinds of AI and data management. And really, trust and innovation are what we lean into. We do that by using advanced cryptography to make models more transparent, but also collaborative. We see transparency and collaboration as two sides of the same coin of creating smarter and safer AI.


Can you talk a little more about how crypto fits into this? Many people say that “crypto and AI are a great fit,” but the rationale often stops at a very high level.

Andrew Stanco: I think the intersection of AI and crypto is still an open question, right? One thing we’ve found is that the hidden secret about AI is that it’s collaborative; it has a multitude of stakeholders. No one data scientist could make an AI model. They can train it, they can fine-tune it, but cryptography becomes a way of doing something and then having a tamper-proof way of verifying that it happened.

So, in a process as complex as AI training, having those tamper-proof and verifiable attestations — both during the training and afterwards — really helps. It creates trust and visibility.

Ariana Spring: At each step of the AI life cycle and training process, there’s a notarization — or a stamp — of what happened. There’s the decentralized ID, or identifier, that’s associated with the agent (human or machine) taking that action. You have the timestamp. And with our Lineage Explorer, you can see that everything we do is registered automatically using cryptography.
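To make that pattern concrete, here is a minimal sketch in Python: each life-cycle step is hashed, tied to a decentralized identifier (DID) and a timestamp, and chained to the previous record, so tampering with any earlier step breaks every later hash. All names here are hypothetical; this illustrates the general technique Spring describes, not EQTY Lab’s actual code, and a production system would use public-key signatures rather than bare hashes.

```python
import hashlib
import json
import time

def notarize(prev_hash: str, actor_did: str, action: str, payload: dict) -> dict:
    """Create a tamper-evident lineage record for one life-cycle step."""
    record = {
        "actor": actor_did,        # e.g. "did:example:researcher-1" (hypothetical DID)
        "action": action,          # e.g. "dataset-ingest", "fine-tune"
        "timestamp": time.time(),
        "payload_hash": hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest(),
        "prev": prev_hash,         # links this record to the previous one
    }
    # Hash the whole record (including the link to its predecessor).
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

# Chain two steps: altering the first record would invalidate the second's hash.
genesis = notarize("0" * 64, "did:example:org", "dataset-ingest", {"rows": 10_000})
step2 = notarize(genesis["hash"], "did:example:researcher-1", "fine-tune",
                 {"base_model": "climate-llm", "epochs": 3})
print(step2["hash"])
```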

And then we use smart contracts in our governance products. So if X parameter is met or not met, a certain action can proceed or not proceed. One of the tools we have is a Governance Studio, which basically programs how you can train an AI or manage your AI life cycle, and that is then reflected downstream.
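In the same spirit, the gating logic Spring describes can be sketched as a small rule table: a life-cycle action proceeds only if its policy checks pass. In EQTY Lab’s products this is enforced by smart contracts; the plain-Python stand-in below, with invented parameter names, only shows the shape of the idea.

```python
from typing import Callable

# Each action maps to a rule; the action may proceed only if the rule passes.
# Parameter names ("eval_accuracy", "bias_audit_passed", ...) are hypothetical.
POLICY: dict[str, Callable[[dict], bool]] = {
    "deploy-model": lambda p: p.get("eval_accuracy", 0.0) >= 0.90
                              and p.get("bias_audit_passed", False),
    "train-model": lambda p: p.get("dataset_licensed", False),
}

def may_proceed(action: str, params: dict) -> bool:
    """Return True only if the action has a policy and its checks pass."""
    rule = POLICY.get(action)
    return rule is not None and rule(params)

print(may_proceed("deploy-model", {"eval_accuracy": 0.93, "bias_audit_passed": True}))  # True
print(may_proceed("deploy-model", {"eval_accuracy": 0.85, "bias_audit_passed": True}))  # False
```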

Can you clarify a bit what type of tools you’re building? For example, are you building tools and doing research meant to help other startups build training models, or are you building training models yourselves? In other words, what exactly is the role of EQTY Lab in this environment?

Andrew Stanco: It’s a mix, in a way, because our focus is on the enterprise, since that’s going to be one of the first big places where you need to get AI correct from a training and governance standpoint. If you dig into that, then we need to have an area where a developer — or someone in that organization — can annotate the code and say, “Okay, this is what happened,” and then create a record. It’s enterprise-focused, with an emphasis on working with developers and the people building and deploying the models.

Ariana Spring: And we’ve worked on training the model as well through the Endowment for Climate Intelligence. We helped train a model called ClimateGPT, which is a climate-specific large language model. That isn’t our bread and butter, but we’ve gone through the process and used our suite of technologies to visualize that process. So we understand what it’s like.

What excites you the most about AI, and what terrifies you the most about AI?

Andrew Stanco: I mean, for excitement, that first moment when you interact with generative AI felt like you uncorked the lightning in the model. The first time you created a prompt in Midjourney, or asked ChatGPT a question, no one had to convince you that it’s powerful. And I didn’t think there were many new things anymore, right?

And as for terror?

Andrew Stanco: I think this is a concern that maybe is the subtext for a lot of what’s going to be at Consensus, just from peeking at the agenda. The concern is that these tools are letting the existing winners dig deeper moats: that this is not necessarily a disruptive technology, but an entrenching one.

And Ariana, your main AI excitement and terror?

Ariana Spring: I’ll start with my fear because I was going to say something similar. I’d say centralization. We’ve seen the harms of centralization when paired with a lack of transparency around how something works. We’ve seen this over the past 10, 15 years with social media, for example. And if you don’t understand why an AI is making the decisions it’s making or who’s responsible, it’s really hard to interrogate why harmful things are being spewed. So I think centralization — and keeping those secrets in black boxes — is really dangerous.

How about excitement?

Ariana Spring: What I’m most excited about is bringing more folks in. We’ve had the chance to work with several different kinds of stakeholder groups as we were training ClimateGPT, such as Indigenous elder groups, low-income urban Black and brown youth, and students in the Middle East. We’re working with all these climate activists and academics to kind of say, “Hey, do you want to help make this model better?”

People were really excited, but maybe they didn’t understand how it worked. Once we taught them how it worked and how they could help, you could see them say, “Oh, this is good.” They gained confidence. Then they wanted to contribute more. So I’m really excited, especially through the work we’re doing at EQTY Research, to begin publishing some of those frameworks, so we don’t have to rely on systems that maybe aren’t that representative.

Beautifully said. See you in Austin at Consensus’ AI Summit.

Edited by Benjamin Schiller.

