Can your private data be as secure as your Bitcoin? In this thought-provoking episode of Trust Revolution, host Shawn Yeager sits down with Marks, co-founder and CEO of OpenSecret and Maple AI, to explore the future of digital trust. Marks unpacks how secure enclaves are reshaping privacy, evolving from the privacy-first Mutiny Wallet to a platform that safeguards user data and shields businesses from costly liabilities. This conversation illuminates a new frontier where AI agents and apps prioritize your security, delivering an encrypted, seamless experience that could redefine how we protect what matters most.
Marks, co-founder and CEO of OpenSecret and Maple AI, is a seasoned mobile app developer and privacy advocate. With a startup background, he’s pioneering secure enclave solutions to safeguard data and rebuild trust in the digital world.
- Follow Marks on X or Nostr: Marks@primal.net.
- Explore OpenSecret and follow them on X and Nostr: OpenSecret@primal.net.
- Try Maple AI and connect on X or Nostr: MapleAI@primal.net.
- Learn about secure enclaves at opensecret.cloud.
- Check out Mutiny Wallet’s open-source privacy innovations on GitHub.
https://podcast.trustrevolution.co
Music in this episode by More Ghost Than Man.
[00:00:03]
Shawn Yeager:
How are you, sir?
[00:00:05] Marks:
Man, I'm doing great. Yeah? Yeah.
[00:00:07] Shawn Yeager:
How was Bitcoin Takeover at South by Southwest last week? It was awesome.
[00:00:12] Marks:
Takeover was the culmination of, like, an entire week of Bitcoin content, and I was speaking at Takeover. So, like, the whole week, I was kind of dreading it. You know, excited, but also dreading, the way you do when you have a speaking engagement. Yeah. And so I was just getting more and more tired every late night at a Bitcoin event, and I'm like, alright, I gotta have energy for Friday. But it ended up going really well. I think the talk was good, but also the content from all the other speakers was pretty solid.
[00:00:40] Shawn Yeager:
Well, fantastic. Any particular takeaways from last week's sessions or interactions? What's the vibe?
[00:00:50] Marks:
Yeah. What is the vibe? I mean, people are obviously excited. I think one of the general vibes is that there has been a shift now, where last year people were worried about being thrown in jail, and now they're worried about whether it's going to be Bitcoin only or whether, you know, altcoins are gonna make it into the reserve. So it's been quite a shift in the worries that we have on our plate and the arguments that we're having online.
[00:01:16] Shawn Yeager:
Absolutely. Yeah. I mean, it is an embarrassment of riches in some ways, and yet, keep pushing. What was the attendance like? I mean, at the Commons, was it a full house, usual suspects, or some new faces, hopefully?
[00:01:28] Marks:
Yeah. It was pretty packed. I think they were trying to max out the fire code, so I think we hit about 300 people. It was standing room only in there. A lot of the usual suspects, a lot of new faces as well. I chatted with one guy who was a stay-at-home dad for years, and this was his first time venturing back out into the marketplace. He and his wife are swapping; she's gonna be the stay-at-home parent now. And so he was looking to get out and network with people, and he has an interest in Bitcoin. I thought that was awesome. Somebody who's just kind of interested and trying to network, great place to start.
[00:02:07] Shawn Yeager:
Terrific. Yeah. At least over the last few years, given my focus on Lightning and payments, it's been tempting to hope it starts with merchants and small business, but I think it begins with the individual. It's all about the individual, so that's great to see. Yeah. Well, speaking of which, let's dive in. I was thinking to myself this morning that I still have Mutiny Wallet installed on my iPhone. I have not yet been able to bring myself to remove it. But maybe starting there, not to rehash too much of that, and I'll make sure to cover some of that backdrop in the show notes for those who are interested: it was, and is, as it lives on in open source, a bold experiment in, broadly speaking, user control and privacy.
And so I'd love to understand, coming out of that and transitioning, or transforming, if you will, into OpenSecret: what did you take from it? What informed the creation of OpenSecret from the foundations of Mutiny Wallet?
[00:03:17] Marks:
Sure. Yeah. So I wasn't there on day one with the company, Mutiny, but I was there as a user on day one. I was following along. I was right with you, you know, downloading the app, first running it on the web as a progressive web app, and then once they launched on the App Store, had that as well. It was my Lightning wallet of choice. I was using it for everything. And really, I think the goal was to take this really complex thing of Lightning and help people run it easily on their mobile device. But then the biggest unlock was being able to synchronize that across all your devices, because Lightning has to be always on in order to accept payments. You have to be running your own server, and that's what made it so complicated. So the guys thought, let's just try and simplify it, and then people can have it, you know, in their browser. And when they wanna accept payments, how do we do that? Well, they innovated some ways to do blind tokens and use ecash to accept payments for you when you're offline. There was really cool stuff going on.
But the difficult part was the user experience, and one of the biggest drawbacks was having users manage their own private key. When I joined Mutiny, the goal was, let's take this to millions of users, hopefully billions of users, right, at some point in the distant future. But it was this non-starter where, if you go to a regular average user who just downloads apps and uses them, and the first interaction is, write down these 12 words, and if you lose them, you're gonna lose all your money, they just get scared of that and go away. So we were trying to figure out how to solve that problem, and we started looking at secure enclaves.
The Mutiny guys were at Sovereign Engineering cohort number one in Madeira. And while we were there, chatting with some people and learning about secure enclaves in the cloud, we thought, this is a really interesting technology, I wonder if we can utilize it. And so as we were looking at Mutiny and scaling it, there were a lot of variables at play, but it turned out that as a company, we needed to kind of pivot away from scaling Mutiny and try to do something else. One of the something-elses we wanted to do was to build an ecash wallet that could scale to millions of users. And again, we kept running into this problem of the UX, the user experience. Like, we have to win on user experience. And we looked at secure enclaves and said, this is how we do it. We can use secure enclaves to manage the private key for the user. They don't have to worry about it when they first get in, but at some point you can prompt them and say, hey, it would be great if you downloaded your private key and saved it as a backup. You can do that when you actually have developed a relationship with them, rather than right off the bat.
And as we looked around, there was nothing out there for using secure enclaves for mobile apps. They are used primarily inside large organizations for securing, like, internal process controls. So we thought, okay, if we are going to use secure enclaves, we need to build a platform to do that for ourselves. And it was at that point we realized, well, instead of having just one ecash wallet or one Mutiny Wallet that uses secure enclaves, why don't we open this up and hopefully have hundreds of apps that have the same privacy, the same security, that Mutiny Wallet could have? So that was a long ramble, but that's kind of how we got to where we are today.
[00:06:40] Shawn Yeager:
Absolutely. And I think maybe for those who aren't familiar, I think of, right or wrong, trusted execution environments. I think of early Intel chip architecture, or, what is it, the ME engine? I forget some of these. But will you give us, Marks, a primer on secure enclaves? What is it in a nutshell? What is it as a technology, and what does it seek to solve, or what does it, in fact, solve? Yeah.
[00:07:10] Marks:
So there are a lot of phrases you might hear. You mentioned trusted execution environments, TEEs; secure enclaves; secure elements. You might hear confidential computing. A lot of these are all referring to roughly the same thing. If you look at your phone, right, you have your phone in your pocket. It has a secure enclave, and they've been in there for over a decade. It is a secure chip where you can stick code inside, and the code is running, but the data that it touches, you cannot see. The data is encrypted before it enters the chip, and once it goes into the chip, only the chip can decrypt it and then run the code against it. And when it spits it out the other end, it is re-encrypted again. So no human has the ability to intercept that data in a plaintext state. Effectively, that's all it is on your device. It is securing your thumbprint, your Face ID, your wallet, driver's license, that kind of stuff.
But then they've moved into the cloud now. Intel was one of the first, with Intel SGX. If you do any research, you'll find that there were some vulnerabilities with that; that was the older generation. Now they've moved on to Intel TDX. Amazon has one they call AWS Nitro. NVIDIA has their own that they're doing on top of GPUs. And then Amazon recently announced their own GPU thing, you know, very recently, and I'm curious to see what they're gonna do as far as encrypted GPUs for theirs as well. So there's a lot going on here, but effectively, the analogy I like to use is: if you're in the kitchen and you are baking your favorite cookies, right? Well, you're gonna get a recipe, and I'm gonna give you a recipe to follow.
And that recipe is like the code that you would put inside the secure enclave. So you take my recipe, but I'm not there in the kitchen with you watching you actually bake. I don't see which kinds of ingredients you use. I don't see which kind of flour you use, what kind of chocolate chips, or, you know, which kind of milk. So that data that you put in there is private to you. But the recipe that is being used is open. It's out on the internet. Everybody can see which recipe is being followed. It's an analogy that seems to resonate with people: really, the secure enclave is verifiable code that's out in the open for everybody to audit, but the data that passes through cannot be seen by anyone.
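For readers who want the recipe analogy in concrete terms, here is a minimal sketch of the flow Marks describes, using Python's `cryptography` library: data is sealed before it reaches the host, only code running inside the enclave boundary can open it, and the result is re-sealed on the way out. The key handling and the `run_in_enclave` boundary are illustrative stand-ins, not OpenSecret's actual API.

```python
# A sketch of the enclave data flow: the host only ever sees ciphertext;
# decryption is only possible inside the enclave boundary.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

user_key = AESGCM.generate_key(bit_length=256)  # known to the user and the enclave only
aead = AESGCM(user_key)

# Outside the enclave: encrypt before the data ever leaves the device.
nonce = os.urandom(12)
sealed = aead.encrypt(nonce, b"my private ingredients", None)

def run_in_enclave(nonce: bytes, sealed: bytes) -> tuple[bytes, bytes]:
    """The 'recipe': public, auditable code. Only in here can the data be read."""
    plaintext = aead.decrypt(nonce, sealed, None)
    result = plaintext.upper()                 # stand-in for the real computation
    out_nonce = os.urandom(12)
    return out_nonce, aead.encrypt(out_nonce, result, None)  # re-seal on the way out

out_nonce, sealed_result = run_in_enclave(nonce, sealed)     # host sees ciphertext only
```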
[00:09:35] Shawn Yeager:
And I would imagine you have run through a few dozen of these metaphors to lock into one, because it is complex. It is challenging to get that across, but great job. I like it. And as you say, Marks, we've been carrying these pieces of hardware on our phones for at least a decade. What have been some of the most notable breakthroughs? And I'll throw in that, correct me if I'm wrong, I believe the digital wallets that we carry are one. I'm an iPhone carrier and person, so I am not as deeply familiar with Android, but I believe it's safe to say it's on both sides. And differential privacy, which I know Apple has put at the center, be it in the way it applies AI to photos and the like. Can you paint a picture there of, you know, what it has brought us, let's say, over the last five to ten years?
[00:10:36] Marks:
Sure. I think one thing people are getting the benefit of is these ephemeral credit card numbers. I don't know how often you've had to change your credit card number because you had some kind of charge that happened, but it's happened to me multiple times. I'd call up the credit card company and say, this charge was not mine, and they would have to give me an all-new number. Well now, with the wallet that is on there, secured by the enclave, Apple's able to generate a new credit card number for you per transaction, per vendor. So I think that's a big benefit. If one is compromised, they don't have to change your entire credit card number over. Another very notable case that was in the news, when was this, like 2015, '16, '17, was the shooter incident in San Bernardino, California, where there was a person who, you know, unfortunately was engaged in a shooting event, and the FBI wanted Apple to unlock the phone for them. They couldn't get in through the PIN. The secure enclave was securing the identity on the phone, and Apple stood their ground and said, we're not going to compromise the device. Basically, the FBI wanted them to ship modified firmware to the phone that would unlock the secure enclave.
And Apple stood their ground and said, we're not gonna unlock the enclave, because that creates a backdoor that's vulnerable for anyone to get into. And so this was a test of the enclave, showing that even the FBI was having a hard time getting in and finding this information. I hesitate to use that as an example because it's such a tragic event, but it does show that the enclaves are, you know, really secure in that regard.
[00:12:11] Shawn Yeager:
So, and we could spend a lot of time talking about it, whether we look to the UK, with them now trying to yet again force a backdoor on Apple devices, and Apple deactivating their iCloud end-to-end security in response. It is those extreme edge cases that test either the viability of the technology or, in this case, perhaps the commitment of a technology company to use it for the good of their customers, and not create what we know does not work, which is a backdoor for one but not for all. So, from what we've seen secure enclaves enable, what remains?
Or rather, what are some of the more egregious challenges, problems, risks to trust and privacy that we endure that you foresee secure enclaves addressing? And certainly, we'll talk about Maple AI.
[00:13:10] Marks:
Sure. Man, there are so many. Really, it's kind of this upgrade for security on the internet that I think we are about to embark upon. HTTP: everybody types in the URL, they go HTTP, you know, colon slash slash, www, whatever. And that was all just out in the plain, and it was insecure. So when all of our banks started coming online and we started having usernames and passwords, we realized we needed this to be secure. So we upgraded the entire internet to HTTPS. It took over a decade to get everybody on board, but we eventually made it. Now pretty much everything goes secure.
However, that only secures data in transit, right? So if you're using your password on public Wi-Fi, trying to log into your bank, it is relatively safe to do that. However, once your data gets into the back end of that bank, or let's just talk about some random app you have on your phone that's tracking your location constantly while you go on a run around your neighborhood: you don't really know who that developer is. You don't know who they hired at their company. You don't know who they recently fired who might have taken data from their servers. And so you're trusting them with your daily location, where you go on a run, and that could leave you vulnerable.
And that data is not encrypted to you. It is open to them. They can share it with advertisers. They can share it with the government. Hackers can get in and steal it. So it is this privacy trade-off that we don't quite think through. We don't realize that we're doing it, but if you look at every app on your phone, you're effectively leaking your data to unknown people all the time. And so I think we are going to witness a third upgrade now to the internet, and that is secure enclaves. I'm calling it HTTPS-E, for secure enclaves. Effectively, I think every app needs to start using enclaves, to not only protect user privacy, right, where you can encrypt every single user individually, but also to protect the developer and their liability.
In 2024, we did some research on data hacks that happen, data breaches. The average cost to a US company when there was a data breach was about $9,000,000. In 2023, it was lower, like in the fours or fives. But that's a significant liability that you're hanging onto as an app provider if you are custodying, custody-ing, that's a hard word, if you're taking custody of users' personally identifiable information. And so we think that secure enclaves are gonna be a way to protect businesses and users.
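A rough sketch of what "encrypt every single user individually" can look like in practice: every user's records are sealed under their own key, so a stolen database yields only ciphertext. In a real deployment the key material would live inside the enclave rather than in process memory; all names here are hypothetical.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class PerUserVault:
    """Per-user envelope encryption: a breach of stored blobs exposes no plaintext."""
    def __init__(self) -> None:
        self._keys: dict[str, bytes] = {}  # in practice, sealed inside the enclave

    def _aead(self, user_id: str) -> AESGCM:
        key = self._keys.setdefault(user_id, AESGCM.generate_key(bit_length=256))
        return AESGCM(key)

    def store(self, user_id: str, record: bytes) -> bytes:
        nonce = os.urandom(12)
        return nonce + self._aead(user_id).encrypt(nonce, record, None)

    def load(self, user_id: str, blob: bytes) -> bytes:
        return self._aead(user_id).decrypt(blob[:12], blob[12:], None)

vault = PerUserVault()
blob = vault.store("alice", b'{"lat": 36.16, "lon": -86.78}')  # run-tracker location
assert vault.load("alice", blob) == b'{"lat": 36.16, "lon": -86.78}'
```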
[00:15:52] Shawn Yeager:
And I was listening, after a walk earlier today, to a conversation; I believe it was Kale, and I don't know the other. It was just one of these, you know, you grab the Fountain podcast app, do a search, and hit play. And it was certainly a conversation I'm hearing happen more often, which is from don't be evil to can't be evil, and from personal data as an asset, the new oil, as the cliché goes, to an absolute liability. So, you know, collect it all, to collect less, to collect none. Talk to me a bit about that. I mean, where could it go? Where are you seeing some of the more progressive, early-adopter customers and companies start to poke around in terms of their ability to collect less or collect none?
[00:16:47] Marks:
I like that phrase change from don't be evil to can't be evil. Because if I can be evil in any way, if that's a possibility, then there's the possibility of me getting a knock on my door at 03:00 in the morning from the government to unlock someone's account. And I just don't wanna have that liability on me. I've got a family, I've got a life to live. So I would rather create an awesome user experience for our users without, effectively, knowing what they're doing.
[00:17:16] Shawn Yeager:
Is it overly hopeful to think that there is a trend in that direction, from collect it all to at least collect as little as possible? In America, and perhaps it's different in other places, I carve healthcare aside; you know, the HIPAA act and some of the related legislation at least creates the facade. I don't know that they're a sterling example of protecting PII or personal health information. So, there or elsewhere, are you seeing a trend, or are we very, very early, with OpenSecret certainly looking to push those boundaries?
[00:18:00] Marks:
I think we're early in many regards. Before we decided to pivot into OpenSecret, we had meetings with a handful of developers, call it 10 different developers, from just the regular mobile app space. I come from mobile apps; I worked at a few different startups as an early employee, and I've been doing mobile development since before the App Store even launched. And so we had a lot of conversations with people, and it was about fifty-fifty. Half of them were on board with the idea that they wanted to provide users their privacy and also protect themselves from liability. The other half told us, yeah, I'm not super interested in that. Part of it is that they actually like the ability to monetize their user data.
And that's been a big part of the business model for so long: advertising, user data monetization, you talk about data being the new oil. So I think it's gonna take time to migrate away from that. There could also be a hybrid approach, and a couple of the developers we talked to said they would like to take this approach, where you lock down the most sensitive information, but you still can share non-PII within the system and provide trends and analysis, and maybe do the advertising model on top of that. But you do create this separation, which is extremely important. And that's really where we are trying to start: if you're an app developer starting out building an app today, most of them don't even think about privacy. They don't look long term and say, what's gonna happen if I have all this user data? They're just concerned with, can I build this user experience and will I get downloads, right? SQL in a Docker container, and just start grabbing everything.
Yeah. When you look at tools like Replit or Bolt.fun, where people are just typing in prompts now, and they're just vibe coding an app, and then they put it out there and ask people to start using it: that's what they're concerned with right now. They don't think about the fact that if this thing goes viral and takes off, they might suddenly have a database full of information that they really shouldn't be holding onto. So we would like to create that same experience, but have privacy turned on by default, have encryption turned on by default. So when they YOLO into something and vibe code and all the buzzwords you wanna say, right, when they get into it, they are protected from day one, and their users are protected from day one. There's no reason why it shouldn't be that way. The only reason is that it hasn't been built yet. And so that's why we're building it.
[00:20:26] Shawn Yeager:
The foot guns are there waiting, I think, when you take that approach. Well, I think that's a great segue, Marks, into your sister company, or product at least, Maple AI; you'll correct me on that. It is no surprise, certainly to me, that LLMs and AI are where you began, because of the sheer volume of data, the uptake, the utilization. I spend probably an hour a day in various AI products. So how does Maple AI make these interactions more trustworthy, if that is the right word, than what's out there today, be it a Grok or OpenAI or others? What's fundamentally different?
And second, I'd love to know your take on, today at least, what are we sacrificing to get those benefits that, in this case, Maple AI and OpenSecret deliver? Mhmm. What's the trade-off?
[00:21:26] Marks:
Well, Maple was really born out of those developer conversations we had when we were pitching OpenSecret to developers. Pretty much a hundred percent of them, close to a hundred percent, said, hey, if you had some kind of AI app where I could chat privately with AI, I'd be all over that. And so that's why we decided to build it. It was a way to figure out if OpenSecret was even possible, because we didn't wanna build a platform for developers before understanding if the UX was even possible to accomplish. We wanted that rock-solid UX of: download an app, log in with email, log in with, you know, whatever OAuth, Google, Twitter, or what have you, and now you're in, with a private key, and end-to-end encrypted.
And so Maple was our way to prove that. And, I mean, I'm really happy that we did, because it's a phenomenal app. It's really easy to use, and it helps with those trade-offs that you're asking about. When you go use ChatGPT or Grok, they're amazing tools, phenomenal. They do a lot of great things. They have a lot of great functionality, a lot of functionality we don't have in Maple yet. But the trade-off you're making is you are sacrificing your privacy to those companies. As you type in your information, everything you type is stored permanently on their servers and then used to train their models in the future.
Now, so many people, I'm sure you've run across this, so many people you talk to don't see that as a problem. Right? I don't have anything to hide; why should I care if I'm giving up my privacy? But our privacy is really important for maintaining open societies, for maintaining open markets, for maintaining these freedoms that we enjoy. Because when we give up our privacy, we allow authority, you know, we allow people to come in and start inserting themselves in the process, and they can create roadblocks. They can create censorship. They can shut us down. So we need to maintain that barrier.
[00:23:20] Shawn Yeager:
There's the phrase, the right to selectively disclose. Mhmm. I don't know if that was Snowden or, you know, long before him, but that's the phrase that sticks with me: even if you presume you have nothing to hide, it is about maintaining, defending, the right to selective disclosure.
[00:23:38] Marks:
Yeah, it's a great quote. That's really where we're at with AI and with Maple. And from a practical standpoint, the more that you tell these other services, the more that it can be used in the future, which can be convenient, but then you have to realize you don't know who is at that company, right? OpenAI hires dozens of people, employs so many individuals, and you don't know who is accessing your data. And it might be very personal information, conversations that you might have just between you and a family member, right, you and your significant other, that you are now sharing with random people.
And so Maple AI gives you that ability. At a high level, what Maple AI does is it creates a private key for you in the secure enclave. It encrypts the chat that you make on your device and then sends it to our servers. And not only do we have that encryption happening in the enclave, but we also have a second one, which is encryption on the GPU. We have the open-source large language model booted up in the GPU, and your request is passed along encrypted to the GPU as well. So nobody is able to see what is being chatted about. The GPU comes up with a response with the LLM and sends it back, encrypted, to your device. The whole pipeline is encrypted, which means that we have no censorship in there. We don't sanitize anything. We just give you a raw open-source model and let you talk to it. Now, it is based on Meta's Llama 3.3 at the moment. We want to bring other ones online soon; we're working through that. We actually currently have an LLM provider that we use.
But because they use GPUs that are encrypted, we can verify the whole process and know that they are end-to-end encrypted as well. So we're looking at how we spin up our own encrypted models. But yeah, the only kind of censorship you might get is from the model itself, the data that it was trained on, right? The biases, the weights it has in it that might tilt one way or another politically. We do our best, we wanna find models that are the least politically motivated possible, and then we don't do anything to insert ourselves in the middle of that process.
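Put together, the client-side view of that pipeline looks roughly like the sketch below: check the enclave's attestation against a fingerprint you computed yourself, and only then encrypt the prompt under a session key that only the attested enclave holds. The endpoints, JSON fields, and key exchange here are hypothetical placeholders, not Maple's real API.

```python
import os
import requests
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def private_chat(prompt: str, expected_fingerprint: str, session_key: bytes) -> str:
    base = "https://enclave.example.com"  # hypothetical endpoint

    # 1. Refuse to talk to an enclave whose attested code fingerprint doesn't
    #    match the one built reproducibly from the published open source.
    att = requests.get(f"{base}/attestation", timeout=10).json()
    if att["measurement"] != expected_fingerprint:
        raise RuntimeError("attestation mismatch: refusing to send data")

    # 2. Encrypt the prompt so only the enclave (and the encrypted GPU
    #    behind it) can read it.
    aead = AESGCM(session_key)  # agreed with the attested enclave beforehand
    nonce = os.urandom(12)
    body = nonce + aead.encrypt(nonce, prompt.encode(), None)

    # 3. The reply comes back encrypted the same way.
    reply = requests.post(f"{base}/chat", data=body, timeout=60).content
    return aead.decrypt(reply[:12], reply[12:], None).decode()
```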
[00:25:58] Shawn Yeager:
And when I initially signed up for Maple AI, I think I used GitHub authentication, and it was absolutely seamless. There was absolutely nothing about it that felt different, other than understanding a bit more about what was going on behind the scenes. Talk more about those, well, perhaps the trade-off is in the model and what's required. I mean, I'll shamelessly say that Grok is my preferred at this point. It's just mind-blowing for various reasons. So, you know, where is the line, I guess, in terms of what's possible with OpenSecret and secure enclaves?
And where is your place in that ecosystem as you see it today?
[00:26:44] Marks:
So I'm definitely not going to be building my own data center and doing all the cool stuff that Elon's doing. I think what they're trying to accomplish is top-notch. For us, I don't look at Maple AI as the only AI tool that people should have. There are so many tools out there, and it should be one tool in your toolbox. You know, if you are trying to fix something in your house, you're gonna go out in your garage, open up your drawer, open up your toolbox, and there are 20 or 30 different tools in there, and you need a different tool for a different situation. And so I use Grok all the time. I use ChatGPT.
I use Venice AI. I use a lot of other AI services for different purposes. But I'm so glad that I have Maple there as well, because when there are chats, things that just should be more private, I have that tool to use. And Maple actually has a very long roadmap. We have a long strategy for Maple. It started, you know, as a glint in our eye, as a proof of concept, but then very quickly we realized there's so much more here. So we are going to continue adding features to Maple. It is going to become more feature-rich. It will be someplace that can sit side by side with ChatGPT and some of these other services.
But then Maple is also a service for other app developers, and so it plays into this whole ecosystem of OpenSecret. Let's go back to that running app that you might be using. You're tracking your runs around your neighborhood. Well, if that developer is built on OpenSecret, they're using it to secure your location data. So now your location data is private. But what if they want to use AI to help you make decisions about, like, how can you have a better running schedule? Maybe you want to run a marathon. Okay, based on how your running history has gone, let me use AI to build you out a plan to get there. The current option right now is for them to grab an API key from someone like OpenAI.
And now they have to break that privacy wall. They have to take your private, really sensitive information and go give it to OpenAI. And that is not a great compromise to make. So Maple is actually part of the OpenSecret ecosystem, and developers can now have a private AI to chat with, to bring within their app, and stay within that privacy ecosystem. So we see Maple as both a really solid consumer app and a business app. We have a lot of business users on Maple already, so we will continue to iterate there. And as that gets stronger, it is going to be stronger for the developers who are using private AI, and vice versa. When developers ask us for things that they want to do in the private AI space, that will go into Maple AI as an end-user product, and both will get better at the same time.
[00:29:28] Shawn Yeager:
Terrific. And that makes a ton of sense. In fact, you touched on one of the questions I wanted to ask, which is, it seems to me, and I'm no expert here, that in enterprise use, certainly OpenAI and others, and not to speak ill of any of them, they've got their team licenses; every time I sign into one of these products, they wanna push me to upgrade to a team product or team license. That to me seems particularly problematic in terms of RAG, or otherwise training these models on intellectual property that belongs to the company, sensitive documents, whatever the case may be. Talk to me a bit about what that looks like. And here, too, are you bullish on enterprise uptake, or do they suffer some of the same challenges collectively that individuals do, which is: it's not a problem, we'll just give it to OpenAI?
[00:30:24] Marks:
If you go look around on Reddit and some of the forums, you'll find countless people saying that they have no problem sharing their company information with OpenAI, which is unfortunate. Right? That's just the nature of it. Some people just think it's not a big deal. I would question whether those were officers of the company hanging out on Reddit saying those things. But, yeah, probably not. Probably someone who, you know, doesn't get perp-walked out the front door if they break a reg. Yeah. That's right. And I can understand where they're coming from. Maybe you're running some small business in a town, and if you are thinking that the threat of sharing information with ChatGPT is that ChatGPT is gonna come take over your business, that's the wrong threat vector to be thinking about. Really what it is, is that you are training ChatGPT's models to benefit your competitor.
The competitor that's in the same town as you, because they're going to go to ChatGPT and start asking questions about, hey, how do I spin up this thing? How do I do a marketing campaign in this market? And if you were using it six months prior, or a year prior, your data is now part of that brain that is going to make recommendations to your competitors. So you're making the system smarter, to not only benefit you, but benefit those around you. And as a business person, you need to make that call, right? Do you want to improve it for your competitors or not? Whereas with something like Maple, you can bring your chats in there. We're adding document upload soon, so you'll be able to upload your documents and do kind of your own RAG-type stuff.
And you'll be able to go over strategy, marketing plans, customer service ideas, documentation, right? There'll be all sorts of things that you can do within Maple that won't be shared with anybody. They will stay local to you and private to you. And that's really a big competitive advantage for business users: they can have this AI that they bring into their strategy room. Right? You bring the smartest people into your room, you close the door, and you have a conversation. Well, now you can bring Maple in there too. And you can verify; you don't have to trust us. You can verify cryptographically that your information is not being spilled out anywhere.
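To make "your own RAG-type stuff" concrete, here is a toy sketch of the retrieval step: pick the most relevant private documents and prepend them to the prompt, so the model sees your context without the corpus ever leaving your encrypted store. The word-overlap scoring is a stand-in for a real embedding model.

```python
def score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query words that appear in the doc."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

def build_prompt(query: str, docs: list[str], k: int = 2) -> str:
    """Retrieve the top-k documents and stuff them into the prompt as context."""
    top = sorted(docs, key=lambda doc: score(query, doc), reverse=True)[:k]
    context = "\n---\n".join(top)
    return f"Using only this context:\n{context}\n\nAnswer: {query}"

docs = [
    "Q3 marketing plan: focus on podcast sponsorships and search ads.",
    "Customer service playbook: respond to every ticket within four hours.",
]
print(build_prompt("What is our marketing focus this quarter?", docs))
```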
[00:32:37] Shawn Yeager:
I think I know the answer to this, but I'll ask it. Are there clear trade-offs, in the sense that what you just described, which is fantastic, sounds like I can have my cake and eat it too? I can benefit from models trained on this massive corpus of data and information across the internet; you know, Grok presumably excels because it has real-time access to X. So I can take advantage of that baseline and augment it with my own documents, my own data. Are there obvious places, or do you foresee places, where it breaks, and where the YOLO approach of an OpenAI or whoever outpaces those who've elected to apply your or similar technology? Are there trade-offs that are apparent?
[00:33:31] Marks:
The AI race right now is very exhausting. If you're online at all, every week there are new advances happening. So I definitely feel the anxiety and the struggle of, if we're going to have a product like Maple AI, we have to be able to keep up in some regard with all the advances that Grok and OpenAI are making, DeepSeek, Mistral; everybody's throwing out new stuff constantly and doing great work. Really, what we need to do is provide a base level of features for our users and do it in a way that is private.
And then we kind of have the benefit of also building OpenSecret, which has Maple built into it. And we would be extremely fine and happy if a developer came along and built a Maple competitor on top of OpenSecret. I would have no problem with that, because the technology's all there. The tools are all there for them to do that. And if they wanna go after a market, like if they wanna do image generation, for example: we don't do image gen inside of Maple. There are reasons why we haven't done that yet; we might do it in the future. But if somebody wants to come build that on top of OpenSecret, by all means, have at it. Make it a phenomenal business for yourself, scale it to millions of users. That'd be great. Really, we're looking at it as: we want to use Maple ourselves. I use it daily. Like, I have a personal assistant prompt that I use to prioritize my day. I use it for doing strategy, for doing marketing, all sorts of things. And so I wanna continue to use it, and we will continue to keep Maple usable for business use cases, as long as there isn't another competitor that is as private. If somebody builds something as private as Maple and more full-featured, then at that point we start to look at the cost-benefit of maintaining Maple or, you know, winding it down. But I don't think the private AI part of OpenSecret will ever go away. That's going to be a mainstay of it. There will always be some really strong private AI functionality in OpenSecret, and there will always be end-user apps on OpenSecret that provide private AI. Don't know if that answers your question, but that's the thought. It does. Absolutely. And it seems to me, I have these conversations often, particularly with friends, colleagues, who are developers
[00:35:54] Shawn Yeager:
who push back, you know; they've heard the examples of just half-baked code that Cursor or something, not to pick on anyone, has spit out, and they've got to spend 10x the time trying to debug it or get it back up and running. But I think it's inevitable. In my mind, LLMs, AI broadly, agents, are now a foundational element of any sort of system. And so I absolutely hear you there. What comes next? You know, if you're willing to give us a glimpse of what's next in the roadmap, or, if you had your way, who would come knocking on OpenSecret's door, and what would they be looking to build?
[00:36:39] Marks:
For me, the lowest-hanging-fruit app ideas out there are note-taking apps, journal apps, for people who want to write down their thoughts, right, their daily thoughts or their internal struggles. These are things that are very personal and private. I think those are going to be the first apps that could be written.
[00:36:59] Shawn Yeager:
And then you plug in. Yeah. Obsidian.
[00:37:03] Marks:
One of the apps we talked to was,
[00:37:07] Shawn Yeager:
Day One. Day One, right? I used to be a heavy user, and when they got dicey on their data policy, I stopped.
[00:37:15] Marks:
Yeah. Right? And they did build private end-to-end encryption into it, right? It took them a long time. I was good friends with those guys. I was actually at Apple's developer conference when they won their first Apple Design Award. I got to hold it, take a picture; it was fun, right? So they're great guys. But we chatted with their main developer, who's not with them anymore, and he said if OpenSecret had been around when they were building it, it would have taken, you know, a matter of weeks, maybe a couple months total, to implement. Whereas it took them a year and a half, because they had to build it for iOS, they had to build it for Android and for web, and they were figuring it out as they went. Whereas OpenSecret is: build it once, run it everywhere, and you get really strong encryption.
So I think those are the first apps, and obviously AI is a big part of it, so there's gonna be a proliferation of AI apps. And then you mentioned AI agents. That's really where we're headed. For those listening, if you've heard of AI agents but you're still confused about what the heck they are: they're effectively AI bots. They are specialized AIs that know how to do a task. So if you are hungry for dinner that night, right, you would go into your main AI, call it ChatGPT or Maple, and say, I'm hungry. Get me some food.
ChatGPT doesn't know how to, like, fulfill your wishes there. All it can do is say, here, let me give you the steps on how to figure out what you wanna eat, and here are the steps on how to order food on DoorDash. And that's where it stops. But with agents, ChatGPT or Maple or some other AI will be able to say, okay, here's what you want to eat, and now let me spin up an agent that knows how to go find restaurants in your area. And then it's going to spin up another agent that knows how to order food on DoorDash. So it will go spin those up. But those agents are not free, right? They cost compute credits, power, electricity, all that stuff. And so we are going to need a way to pay all of them. And that's where ecash comes in. You know, you have Lightning, you have Bitcoin, you have stablecoins, you have credit cards that are trying to do this. I don't think credit cards are gonna be able to shoehorn themselves into this new paradigm of AIs paying each other, because we're talking about three and a quarter points per transaction, plus 30 cents. Yeah. Yeah. Exactly. And we're talking about, like, fractions of a penny per transaction, maybe a few cents, really. A few sats is the term that we should be throwing around. So what we need is some kind of wallet that's attached to your AI that can then pay other AI agents, and you just authorize it. You say, okay, here's your budget. You got $50 in here; now just use it up as you need to, and then I'll replenish you in the future.
And I'm hopeful and bullish that ecash is gonna be that solution. I think it was built for this,
[00:40:00] Shawn Yeager:
and it satisfies a real need. And so, if I may, Marks, let me pause you there. I mean, I think on the topic of trust, and I'm dating myself, but I think back to the nineties, the beginning of my career at Microsoft, when we were professing intelligent agents. You know, you stick around long enough, these things always come full circle. But now, as then, I think it comes down to: what will it take for me to trust letting this thing loose? You know, I've got a computer science degree, been in tech for my entire career, I pretty deeply understand the technology, and I still don't know what it will take for me. What do I need, to qualify and quantify giving this thing a wallet, a budget? Which, you know, on Nostr Wallet Connect, I can set budgets. All these things are great; I understand that. But that's sort of the crux of it, I suppose: how far away are we, do you think, from this being in the wild?
And what is your take on how secure enclaves, OpenSecret specifically, address that gap, that trust gap of: okay, yes, I'm gonna give it a leash and let it run? Yeah.
[00:41:13] Marks:
In my mind, as I'm listening to you, I think you are describing two different definitions of trust. The first one is, do you trust it to be effective? Do you trust these AIs to go and do the thing that you want? If you want to eat food for dinner that night, do you trust that it's actually gonna bring you food that you like, or is it gonna bring you some styrofoam that you don't wanna eat at all? That is something that we can just play out through the user experience and trial and error, and make sure we trust that that's gonna happen. Then there's the deeper version of trust: do I actually think that it's going to be benevolent and act on my behalf, that it's not gonna have bugs that will suddenly spend my entire wallet in one swoop?
[00:41:58] Shawn Yeager:
And I think hallucination, in the age of LLMs, has, for me at least, reinjected that concern. You know, you see how far off the track these things can go.
[00:42:10] Marks:
Yeah, definitely. Hallucination is definitely a big part of chat AI. And I think agents have an opportunity to be more transactional, where they can write unit tests and prove things and say, if the input is this, make sure the output is this, and don't stray from that. So I think we might be able to have more provable trust in that kind of design. But, you know, you mentioned working at Microsoft. I used to work at Apple. When you're in these big corporations, you have these thresholds of payments that can be approved, right? All these different managers you have above you, they each have their own budget level, right? Discretionary spending. One can approve up to $250. The next person can approve up to $250,000 or something.
I think that we probably end up in some kind of space like that. And you're seeing that with Nostr Wallet Connect, being able to set a budget for different apps. So maybe you say, alright, I'm okay with you making decisions up to this point on your own; if it needs to go above that, come back to me for approval, whether that's a push notification that shows up on your phone. I mean, how dope would that be if you're, like, sitting in a movie or somewhere at a ballgame, and your phone says, oh, hey, your AI agent wants to do this really important thing for you. Do you approve it? And you're like, oh, yeah, for sure, go for it. Absolutely. Maybe it's negotiating your car insurance for you, and it's like, hey, I just found that I can save you $500 a year by switching to this new one. Do you authorize me to do it? Yes, I would say yes to that. Maybe it gives me a little quick report right there on the screen, and I can say yes to that. So I think we need to figure out how we set up these guardrails so that we do feel comfortable, you know, at that level of trust.
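The guardrail Marks describes, auto-approve small spends and escalate big ones, reduces to a few lines of logic. A minimal sketch, with all names hypothetical and the actual settlement layer (ecash, Lightning, Nostr Wallet Connect) left out:

```python
from typing import Callable

class AgentWallet:
    """Spend freely under a per-action limit and a total budget; escalate above it."""
    def __init__(self, budget_sats: int, auto_approve_limit_sats: int) -> None:
        self.remaining = budget_sats
        self.limit = auto_approve_limit_sats

    def pay(self, amount: int, memo: str, ask_user: Callable[[int, str], bool]) -> bool:
        if amount > self.remaining:
            return False                                 # budget exhausted: always deny
        if amount > self.limit and not ask_user(amount, memo):
            return False                                 # above threshold: needs approval
        self.remaining -= amount                         # settlement would happen here
        return True

wallet = AgentWallet(budget_sats=50_000, auto_approve_limit_sats=1_000)
wallet.pay(21, "restaurant-finder agent", ask_user=lambda a, m: False)  # auto-approved
wallet.pay(30_000, "switch car insurance",                              # the push-notification case
           ask_user=lambda a, m: True)   # stand-in for the user tapping "approve"
```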
And then with OpenSecret, what we're doing is we are putting the open-source code out there. This is the code that runs inside the enclaves, and then we have reproducible builds. So any user that wants to can go out, download the code, run the build, and they get a checksum. They get this number that says, this is the fingerprint of the build. And then when you go log into Maple, you actually see the fingerprint that the server is presenting; it's called attestation. And you, as a user, can compare those two fingerprints and verify that they match. Now, the software does that for you; it gives you a green verified badge, a little check mark, but users are able to do it themselves. And to me, that's kind of the final piece to this trust that you're asking about: we need to see the code. We need to know what's running on these servers, running in the secure enclaves. With companies like Apple, with their Private Cloud Compute and their Apple Intelligence, they're using secure enclaves. However, Apple doesn't wanna open source all of that, so they're going to third-party auditors, and we have to trust that the auditors are looking at the code and verifying it. It's the same thing with Signal. Signal has third-party auditors that come look at their code. And so if you're gonna use Signal to chat, you are doing some trusting, right? Because you download the app from the App Store.
Even though Signal publishes their code open source, you can't verify that the build you're downloading is the exact same code that's out in the open. You can hope, but now you're trusting the third-party auditors who are saying, yeah, we checked out this build; it is the same code. So there's a level of trust there. And with OpenSecret, we're trying to get as close to the user as possible with that trust. We wanna bring the code and the user as close together as possible. That's really what we're going after. Obviously, companies are gonna do it different ways, and you have to decide as a user for yourself: what is your risk model? How much can you trust someone, and how much do you need to have more eyes on the code yourself?
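The verify-it-yourself loop Marks outlines boils down to: reproduce the build locally, hash it, and compare against the fingerprint the server attests to at login. A simplified sketch; the endpoint and field names are hypothetical, and a real attestation document (AWS Nitro's, for example) is a signed structure whose certificate chain must also be checked.

```python
import hashlib
import requests

def local_fingerprint(image_path: str) -> str:
    """Hash of the enclave image you built yourself from the open-source code."""
    with open(image_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verify(image_path: str, attestation_url: str) -> bool:
    """Compare your local build fingerprint with what the server attests to."""
    served = requests.get(attestation_url, timeout=10).json()["measurement"]
    return served == local_fingerprint(image_path)

# e.g. verify("enclave_build.eif", "https://enclave.example.com/attestation")
```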
[00:45:53] Shawn Yeager:
And that brings to mind, I wonder, Marks, if you and the team didn't have a lot of interesting conversations about business model and monetization versus don't trust, verify. Was that non-trivial, or was it clear from the beginning that you were gonna push it as close to the user as possible and have as much visibility and transparency as you've laid out? Because that does seem to me to be at least the reason, if not the excuse, that a lot of established companies hold up.
[00:46:22] Marks:
Mhmm. This definitely was a big part of the conversation. We very strongly feel like we need to have our code open source. However, we're trying to build a business, especially a SaaS business, and it's a threat, right, where your software can be copied and someone can spin up a competitor to you. So we looked at the licenses out there. This was controversial within our company, in our discussions of which license we go with. We eventually went with the AGPL for the secure enclave code; Maple itself, the client, that is MIT. People are welcome to fork that and do their own commercial products if they want to, and they don't have to contribute back. But we did pick the AGPL for a few reasons.
One of them was, we are trying to build a business. And so if people are going to copy our code and improve it, we would love for those improvements to come back to us, and for there not to be just a straight competitor right off the bat when we're so small. Another reason is, we feel very strongly that secure enclaves, to be effective, need to have their code open source and reproducible. And so if someone's going to take the OpenSecret enclave code, we want them to publish it open source as well, for the protection of the users. If our code is gonna be securing people's stuff, even if it's on another person's device, another person's server, we want it to be out in the open so that users can be protected.
So that's really the approach that we took. It was not an easy decision to make, because we wanted to provide the strongest privacy possible, but that's eventually where we landed. And I think that takes us to
[00:48:05] Shawn Yeager:
the last question I'd love to raise with you, Marks, which is, speaking to business leaders, be they heads of product, heads of engineering, CXOs: in a future where AI, certainly LLMs, I think we can foresee, become ubiquitous, where other compute-intensive and increasingly data-hungry applications become ubiquitous, what changes do those incumbent players need to start deliberating and deciding on in order to keep pace, and presumably to make the decisions that I think you and I would agree are best for the user, and provide the trust that they'll demand?
[00:48:55] Marks:
Trust and privacy have been a nice-to-have for too long. For decades, they've been a nice-to-have, because in order to build effective cloud environments and to monetize a SaaS product, you almost needed to throw privacy out the window to really make it effective. The technology has finally caught up, with things like secure enclaves, and with devices having so much power on them; you can run an LLM on a mobile phone now pretty effectively. It won't get you everything; the models that we run in the cloud are still way more powerful, but you can sometimes get enough of what you need on there. So the computing has caught up to this, and now we are going to see users caring more about privacy.
You look at the political landscape and just what has happened in the last few months: more people are talking about privacy and trust and what that means for themselves. I'm not talking just about America; I'm talking, you know, all over the world. Europe is a big example of that, where privacy is really hanging in the balance. It's under threat; it's attacked continuously. Yeah, definitely. So I think, if you're at a company, if you have an app that's being used by a lot of people, you need to be watching for a competitor to come out that has the trust and the privacy angle that you don't have. And how can you adapt to that? With OpenSecret, we don't require people to throw away all their code. We can plug into an existing tech stack.
That was really important to us, that we don't require people to start from scratch. So you could come talk to us, and we can help you start to secure just pieces of your data. But you don't have to, right? You can download our code and do your own secure enclaves. But I think that every company out there that has a successful app should start looking at secure enclaves and figuring out what user data needs to be public and shared within their database, and what does not. Because if it does not need to be shared for mission-critical things to function, you should be locking it down. You should be privatizing it, securing it. It's not only good for your users; it's good for your liability as a business, and it's good as a competitive advantage against those other apps that are gonna come along trying to be more secure and more private.
[00:51:10] Shawn Yeager:
I wonder, Marks, if you have a perspective on what this does in terms of regulatory advantage, or to just minimize the attack surface, if you will. Companies, and we see this with Apple, with Google, with all the large players, are running this gauntlet of overlapping and competing regulatory regimes. So do you have a sense of what adopting secure enclaves, adopting OpenSecret, would do to one's exposure, let's say, to a lot of these messy, sometimes overlapping and competing privacy and regulatory regimes?
[00:51:49] Marks:
Yeah. That is a heavy question right there. First off, my portion of this podcast is going over Obscura VPN, so I'm gonna dox myself a little bit there. It's a great product. I love it. Fantastic. And second, you're not a lawyer. Yeah, exactly, I'm not a lawyer; so there's the other thing. I think that secure enclaves help you when it comes to regulation, because you can effectively be hands-off. You know, in the Bitcoin world and the digital currency world, we talk a lot about self-custody, right? You have your own keys; you secure your own coins. And for a business, if the users have their own keys, the business doesn't have custody of their coins, of their money, and so there are all sorts of regulations and liabilities that don't apply to them. With OpenSecret, we are bringing self-custody to your data.
And so all the same things apply, where the user is in control of their data. You, as a provider, don't actually have access. You don't have any control over what data they generate, what they interact with, and what they do with that data. So all sorts of liability no longer applies to you in that regard, and regulations don't apply to you. Now, on the flip side, I do wonder if companies that are earliest to embrace secure enclaves might be some kind of target for coercion, you know, like Apple experienced with the enclaves on the phone. Maybe I do get a knock on my door that says, there is this one specific user;
we need you to push an update to this product, knowing that it will actually break the product. So with Maple, if we push out a nefarious build that doesn't match the open-source version of Maple, your client will not connect. It'll immediately sever the connection and won't do anything, so you won't be compromised. However, you know, we could push a backdoor for the government to get at one specific user, if that user came and logged in. But again, it requires that user to take that update. It's a very detailed question and a very detailed answer, right? I don't wanna dig into all the technicalities.
But we have set it up to make that situation nearly impossible. I do wonder, you know, there is this threat that somebody could come put pressure on us. And so we build all of this cryptography, we build all this encryption in there, so that that scenario cannot play out. I don't wanna say it's impossible, but it's as close to impossible as we can make it, so that we can protect ourselves from our users, and our users can be protected from us.
[00:54:21] Shawn Yeager:
It's an excellent point. You know, I came at this, from the standpoint, obviously, of what would it do perhaps to demonstrably, prove that, hey, I I can't help you. But then, little if anything stops these these government agencies from believing that you can build a backdoor for just them. Right? So Right. It's a great point. Well, Mark, I've been looking forward to this conversation and have just been delighted. Really appreciate it. For those, and I'll certainly get this into the show notes, who want to follow you, your work, Open Secret, Maple AI, where should they where should they look? Where should they follow you? Great. Likewise. I've really enjoyed this conversation. Was looking forward to it.
[00:55:01] Marks:
I'm on Twitter at Noster. My username on on x is marks underscore f t w for the win. And then on Noster, I'm just marks@primal.net. You can follow me there. And then we have Maple on all the places as well. On X, it's try maple.ai. On Nostr, it's just maple.ai@primal.net. And then Open Secret is there as well. Open Secret Cloud is where you'll find our handle. And then our website is available as well, opensecret.cloud. And then I recommend any developer that wants to try out Secure Enclaves or wants to try out Open Secret, first go to maple. Go to try maple.ai and see what it's like for an end user. And that is the light bulb moment. Because when you think about building encryption or doing any of these private key things, it's very daunting and you think it's gonna be convoluted and difficult to use. So just go to try maple.ai, log in and you won't notice that you're using end to end encryption. It's that easy. And so that's where I would really point people is just go create a free account, give it a try, and then you'll taste what it's like to have an app that is built on open secret.
[00:56:11] Shawn Yeager:
Terrific. Yep. I use it. Great product, smooth as can be, and I'm really excited to see iterations of it and what others build. Thanks, Mark. Appreciate the time. Talk to you soon. Okay. We'll talk to you. Bye.
How are you, sir?
[00:00:05] Marks:
Man, I'm doing great. Yeah? Yeah.
[00:00:07] Shawn Yeager:
How was Bitcoin takeover at South by Southwest last week? It was awesome.
[00:00:12] Marks:
Takeover was the culmination of, like, an entire week of Bitcoin content. And by the and I was speaking at Takeover. So, like, the whole week, I was kind of dreading you know, excited, but also dreading when you have a speaking engagement. Yeah. And so I was just getting more and more tired every late night at a Bitcoin event, and I'm like, alright. I gotta have energy for Friday. But, it ended up going really well. I think, the talk was good, but also the content from all the other speakers, was pretty solid.
[00:00:40] Shawn Yeager:
Well, fantastic. What was any any particular takeaways of last week's sessions or interactions? What what's what's the vibe?
[00:00:50] Marks:
Yeah. What is the vibe? I mean, people are obviously excited. I think one of the general vibes is that there has been a shift now where last year people were worried about being thrown in jail. And now they're worried if it's going to be Bitcoin only, or if, you know, alt altcoins are gonna make it into the reserve. So it's been quite a shift in in in the worries that we have on our plate and the arguments that we're having online.
[00:01:16] Shawn Yeager:
Absolutely. Yeah. I mean, it is an embarrassment of riches in some ways, and yet keep pushing. What was the attendance like? I mean, what did you at at commons, was it full house,
[00:01:28] Marks:
usual usual suspects, or some new faces, hopefully? Yeah. It was pretty packed. I think they were trying to max out the fire code, so I think we hit about 300 people. There was it was standing room only in there. A lot of the usual suspects, a lot of new faces as well. I chatted I chatted with one guy who was a stay at home dad for years, and this was his first time kind of venturing back out into the marketplace. His he and his wife are swapping. She's gonna be stay at home now. And so he was looking to get out and network with people and has an interest in Bitcoin. And so I thought that was awesome. Like somebody who's just kind of interested and and trying to network, great place to start.
[00:02:07] Shawn Yeager:
Terrific. Yeah. Over the last few years, given my focus on Lightning and payments, it's been tempting to hope that merchants and small businesses would lead, but I think it begins with the individual. It's all about the individual, so that's great to see. Well, speaking of which, let's dive in. I was thinking to myself this morning that I still have Mutiny Wallet installed on my iPhone; I have not yet been able to bring myself to remove it. But maybe starting there, not to rehash too much of that, and I'll make sure to cover some of that backdrop in the show notes for those who are interested: it was, and is, as it lives on in open source, a bold experiment in, broadly speaking, user control and privacy.
And so I'd love to understand, coming out of that and transitioning, or transforming, if you will, into Open Secret: what did you take from, what informed the creation of Open Secret from the foundations of Mutiny Wallet?
[00:03:17] Marks:
Sure. Yeah. So I wasn't there on day one with the company, Mutiny, but I was there as a user on day one. I was following along, right there with you, downloading the app, first running it in the browser as a progressive web app, and then, once they launched on the App Store, using that as well. It was my Lightning wallet of choice; I was using it for everything. I think the goal was to take this really complex thing, Lightning, and help people run it easily on their mobile device. But the biggest unlock was being able to synchronize that across all your devices, because Lightning has to be always on in order to accept payments. You had to be running your own server, and that's what made it so complicated. So the thinking was, let's try to simplify it so people can have it in their browser. And when they want to accept payments, how do we do that? Well, they innovated some ways to do blind tokens and use ecash to accept payments for you when you're offline. There was really cool stuff going on.
But the difficult part was the user experience, and one of the biggest drawbacks was having users manage their own private key. When I joined Mutiny, the goal was to take this to millions of users, hopefully billions at some point in the distant future. But there was this non-starter: if you go to a regular, average user who just downloads apps and uses them, and the first interaction is "write down these 12 words; if you lose them, you're gonna lose all your money," they get scared of that and go away. So we were trying to figure out how to solve that problem, and we started looking at secure enclaves.
The Mutiny guys were at the Sovereign Engineering cohort number one in Madeira. While we were there, chatting with some people and learning about secure enclaves in the cloud, we thought, this is a really interesting technology; I wonder if we can utilize it. As we looked at scaling Mutiny, there were a lot of variables at play, but it turned out that as a company we needed to pivot away from scaling Mutiny and try something else. One of the something-elses we wanted to do was build an ecash wallet that could scale to millions of users. And again, we kept running into this problem of the UX, the user experience. We have to win on user experience. We looked at secure enclaves and said, this is how we do it. We can use secure enclaves to manage the private key for the user. They don't have to worry about it when they first get in, but at some point you can prompt them and say, hey, it would be great if you downloaded your private key and saved it as a backup. You can do that once you've actually developed a relationship with them, rather than right off the bat.
And as we looked around, there was nothing out there for using secure enclaves for mobile apps. They're used primarily inside large organizations for securing internal process controls. So we thought, okay, if we are going to use secure enclaves, we need to build a platform to do that ourselves. And it was at that point we realized, instead of having just one ecash wallet or one Mutiny Wallet that uses secure enclaves, why don't we open this up and hopefully have hundreds of apps with the same privacy and security that Mutiny Wallet could have had? That was a long ramble, but that's how we got to where we are today.
[00:06:40] Shawn Yeager:
Absolutely. And maybe for those who aren't familiar: I think, rightly or wrongly, of trusted execution environments. I think of early Intel chip architecture, or, what is it, the ME, the Management Engine? I forget some of these. But will you give us a primer on secure enclaves, Marks? What is it in a nutshell? What is it as a technology, and what does it seek to solve, or what does it, in fact, solve?
[00:07:10] Marks:
Yeah. So there are a lot of phrases you might hear. You mentioned trusted execution environments, TEEs; also secure enclaves, secure elements; you might hear confidential computing. These all refer to roughly the same thing. Look at the phone in your pocket: it has a secure enclave, and they've been in there for over a decade. It is a secure chip where you can stick code inside, and the code runs, but the data it processes, you cannot see. The data is encrypted before it enters the chip. Once it goes in, only the chip can decrypt it and run the code against it. And when it spits it out the other end, it is re-encrypted. So no human has the ability to intercept that data in a plaintext state. On your device, that's all it is: it secures your thumbprint, your Face ID, your wallet, driver's license, that kind of stuff.
But they've moved into the cloud now. Intel was one of the first with Intel SGX. If you do any research, you'll find there were some vulnerabilities with that; it was the older generation, and they've since moved on to Intel TDX. Amazon has one they call AWS Nitro, and NVIDIA has their own that they're doing on top of GPUs. Then Amazon very recently announced their own GPU offering as well, and I'm curious to see what they're going to do as far as encrypted GPUs. So there's a lot going on here, but the analogy I like to use is baking. Say you're in the kitchen baking your favorite cookies, and I give you a recipe to follow.
That recipe is like the code you put inside the secure enclave. You take my recipe, but I'm not there in the kitchen watching you bake. I don't see which ingredients you use, which kind of flour, what kind of chocolate chips, which kind of milk. The data you put in there is private to you. But the recipe being used is open; it's out on the internet, and everybody can see which recipe is being followed. That analogy seems to resonate with people: the secure enclave is verifiable code that's out in the open for everybody to audit, but the data that passes through cannot be seen by anyone.
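For developers following along, here is one way to picture that contract in code: a minimal sketch, assuming a toy setup in which plain RSA stands in for the attested key exchange real enclaves perform. None of these names are OpenSecret's actual API.

```typescript
// A toy model of the "public recipe, private ingredients" contract.
// Real enclaves (SGX, TDX, Nitro) negotiate keys via hardware attestation;
// plain RSA here is only to make the data flow visible.
import { generateKeyPairSync, publicEncrypt, privateDecrypt } from "node:crypto";

// The enclave's private key never leaves the secure hardware.
const enclaveKeys = generateKeyPairSync("rsa", { modulusLength: 2048 });
// The client keeps its own pair so the enclave can encrypt the reply back.
const clientKeys = generateKeyPairSync("rsa", { modulusLength: 2048 });

// The public "recipe": anyone can audit this code.
function recipe(ingredients: string): string {
  return `cookies baked with: ${ingredients}`;
}

// Inside the enclave: decrypt, run the public recipe, re-encrypt the result.
function enclaveRun(sealedInput: Buffer): Buffer {
  const ingredients = privateDecrypt(enclaveKeys.privateKey, sealedInput).toString();
  return publicEncrypt(clientKeys.publicKey, Buffer.from(recipe(ingredients)));
}

// Client side: the operator relaying these buffers never sees plaintext.
const sealed = publicEncrypt(enclaveKeys.publicKey, Buffer.from("flour, chocolate chips, milk"));
const reply = enclaveRun(sealed);
console.log(privateDecrypt(clientKeys.privateKey, reply).toString());
// -> cookies baked with: flour, chocolate chips, milk
```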
[00:09:35] Shawn Yeager:
And I would imagine you ran through a few dozen of these metaphors to lock into one, because it is complex. It is challenging to get that across, but great job; I like it. And as you say, Marks, we've been carrying these pieces of hardware on our phones for at least a decade. What have been some of the most notable breakthroughs? I'll throw in, correct me if I'm wrong, the digital wallets that we carry. I'm an iPhone person, so I am not as deeply familiar with Android, but I believe it's safe to say on both sides, differential privacy, which I know Apple has put at the center, be it in the way it applies AI to photos and the like. Can you paint a picture of what it has brought us, let's say, over the last five to ten years?
[00:10:36] Marks:
Sure. One benefit they're getting is ephemeral credit card numbers. I don't know how often you've had to change your credit card number because of some fraudulent charge, but it's happened to me multiple times. I'd call up the credit card company, say this charge was not mine, and they'd have to issue an all-new number. Now, with the wallet on the phone secured by the enclave, Apple is able to generate a new credit card number for you per transaction, per vendor. So if one is compromised, they don't have to change your entire credit card number over. Another very notable case that was in the news, when was this, 2015, 2016, was the shooter incident in San Bernardino, California, where a person was, unfortunately, engaged in a shooting event, and the FBI wanted Apple to unlock the phone for them. They couldn't get in through the PIN; the secure enclave was securing the identity on the phone, and Apple stood their ground and said, we're not going to compromise the device. Basically, the FBI wanted them to ship modified firmware to the phone that would unlock the secure enclave.
And Apple stood their ground and said, we're not going to unlock the enclave, because that creates a backdoor that's vulnerable for anyone to get into. So this was a test of the enclave, showing that even the FBI was having a hard time getting in and finding this information. I hesitate to use that as an example because it's such a tragic event, but it does show that the enclaves really are secure in that regard.
[00:12:11] Shawn Yeager:
So, and we could spend a lot of time talking about this, whether we look to the UK now trying yet again to force a backdoor on Apple devices, and Apple deactivating their iCloud end-to-end security in response: it is those extreme edge cases that test either the viability of the technology or, in this case, the commitment of a technology company to use it for the good of their customers, and not to create what we know does not work, a backdoor for one but not for all. So, from what we've seen secure enclaves enable, what remains?
Or rather, what are some of the more egregious challenges, problems, and risks to trust and privacy that we endure, which you foresee secure enclaves addressing? And certainly, we'll talk about Maple AI.
[00:13:10] Marks:
Sure. Man, there are so many. Really, it's this upgrade for security on the internet that I think we're about to embark upon. With HTTP, everybody typed in the URL, http colon slash slash, www, whatever, and that was all just out in the open; it was insecure. Then, when all of our banks started coming online and we started having usernames and passwords, we realized we needed this to be secure. So we upgraded the entire internet to HTTPS. It took over a decade to get everybody on board, but we eventually made it. Now pretty much everything goes over a secure connection.
However, that only secures data in transit. If you're on public Wi-Fi trying to log into your bank, it's relatively safe to do that. But once your data gets into the back end of that bank, or let's talk about some random app on your phone: you're going on a run around your neighborhood and it's tracking your location constantly. You don't really know who that developer is. You don't know who they hired at their company, or who they recently fired who might have taken data from their servers. So you're trusting them with your daily location, where you go on a run, and that could leave you vulnerable.
And that data is not encrypted to you. It is open to them. They can share it with advertisers. They can share it with the government. Hackers can get in and steal it. It's a privacy trade-off we don't think through much; we don't realize we're making it. But if you look at every app on your phone, you're effectively leaking your data to unknown people all the time. So I think we are going to witness a third upgrade to the internet, and that is secure enclaves. I'm calling it HTTPSE, for secure enclaves. Effectively, I think every app needs to start using enclaves, not only to protect user privacy, where you can encrypt every single user individually, but also to protect the developer and their liability.
In 2024, we did some research on data breaches. The average cost to a US company when there was a data breach was about $9,000,000. In 2023, it was lower, like in the fours or fives. That's a significant liability you're hanging onto as an app provider if you're custodying, custody, that's a hard word, if you're taking custody of users' personally identifiable information. So we think secure enclaves are going to be a way to protect businesses and users.
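As a concrete illustration of that transit-versus-rest gap, here is a minimal sketch of per-user encryption at rest, under the assumption that the per-user keys would live with an enclave or the user, never with the operator; key management is deliberately elided.

```typescript
// HTTPS already protects this data on the wire; the question is what the
// backend stores. Encrypting each row under a per-user key means a leaked
// database, or a curious employee, yields only ciphertext.
import { randomBytes, createCipheriv } from "node:crypto";

function encryptForUser(userKey: Buffer, route: string) {
  const nonce = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", userKey, nonce);
  const ciphertext = Buffer.concat([cipher.update(route, "utf8"), cipher.final()]);
  return { nonce, ciphertext, tag: cipher.getAuthTag() };
}

// What too many backends store today: plaintext location history.
const plainRow = { userId: 42, route: "47.6062,-122.3321 -> 47.6097,-122.3331" };

// What an enclave-backed design would store instead.
const userKey = randomBytes(32); // stands in for a key only the user/enclave holds
const sealedRow = { userId: 42, ...encryptForUser(userKey, plainRow.route) };
console.log(sealedRow.ciphertext.toString("base64")); // unreadable to the operator
```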
[00:15:52] Shawn Yeager:
And I was listening, after a walk earlier today, to a conversation, I believe it was Kale, and I don't know the other speaker; one of those where you grab the Fountain podcast app, do a search, and hit play. It was about a conversation I'm hearing happen more often, which is the move from "don't be evil" to "can't be evil," and from personal data as an asset, the new oil, as the cliche goes, to an absolute liability. From collect it all, to collect less, to collect none. Talk to me a bit about that. Where could it go? Where are you seeing some of the more progressive, early-adopter customers and companies start to poke around in terms of their ability to collect less or collect none?
[00:16:47] Marks:
I like that phrase change, from don't be evil to can't be evil. Because if I can be evil in any way, if that's a possibility, then there's the possibility of me getting a knock on my door at 3:00 in the morning from the government to unlock someone's account. And I just don't want that liability on me. I've got a family, I've got a life to live. So I would rather create an awesome user experience for our users without, effectively, knowing what they're doing.
[00:17:16] Shawn Yeager:
Is it overly hopeful to think there's a trend in that direction, from collect it all to at least collect as little as possible? In America, and perhaps it's different elsewhere, I'd carve healthcare aside; HIPAA and some of the related legislation at least create the facade, though I don't know that they're a sterling example of protecting PII or personal health information. So, there or elsewhere, are you seeing a trend, or are we very, very early, with Open Secret looking to push those boundaries?
[00:18:00] Marks:
I think we're early in many regards. Before we decided to pivot into Open Secret, we had meetings with a handful of developers, call it 10 different developers, from the regular mobile app space. I come from mobile apps; I worked at a few different startups as an early employee, and I've been doing mobile development since before the App Store even launched. So we had a lot of conversations with people, and it was about fifty-fifty. Half of them were on board with the idea that they wanted to provide users their privacy and also protect themselves from liability. The other half told us, yeah, I'm not super interested in that. Part of it is that they actually like the ability to monetize their user data.
That's been a big part of the business model for so long: advertising, user-data monetization, data as the new oil. So I think it's going to take time to migrate away from that. There could also be a hybrid approach, and a couple of the developers we talked to said they would like to take it, where you lock down the most sensitive information but still share non-PII within the system, provide trends and analysis, and maybe run the advertising model on top of that. But you do create this separation, which is extremely important. And that's really where we're trying to start: if you're an app developer building an app today, most don't even think about privacy. They don't look long term and ask what's going to happen if they have all this user data. They're just concerned with, can I build this user experience, and will I get downloads? They spin up SQL in a Docker container and just start grabbing everything.
When you look at tools like Replit or Bolt.fun, where people are just typing in prompts now and vibe coding their way into an app, they put it out there and ask people to start using it. That's what they're concerned with right now. They don't think about the fact that if this thing goes viral and takes off, they might suddenly have a database full of information they really shouldn't be holding onto. So we would like to create that same experience, but with privacy turned on by default, encryption turned on by default. When they YOLO into something and vibe code, all the buzzwords you want, they're protected from day one and their users are protected from day one. There's no reason it shouldn't be that way. The only reason is that it hasn't been built yet. And so that's why we're building it.
[00:20:26] Shawn Yeager:
The foot guns are there waiting, I think, when you take that approach. Well, that's a great segue, Marks, into your sister company, or product at least, Maple AI; you'll correct me on that. It's no surprise, certainly to me, that LLMs and AI are where you began, given the sheer volume of data and the uptake and utilization. I spend probably an hour a day in various AI products. So how does Maple AI make these interactions more trustworthy, if that's the right word, than what's out there today, be it Grok, OpenAI, or others? What's fundamentally different?
And second, I'd love to know your take on what, today at least, we're sacrificing to get the benefits that Maple AI and Open Secret deliver. What's the trade-off?
[00:21:26] Marks:
Well, Maple was really born out of those developer conversations we had when we were pitching Open Secret to developers. Pretty much a hundred percent of them, close to a hundred percent, said, hey, if you had some kind of AI app where I could chat privately with AI, I'd be all over that. So that's why we decided to build it. It was a way to figure out whether Open Secret was even possible, because we didn't want to build a platform for developers before understanding whether the UX was achievable. We wanted that rock-solid UX of download an app, log in with email, or log in with whatever OAuth, Google, Twitter, or what have you, and now you're in, with a private key, end-to-end encrypted.
Maple was our way to prove that, and I'm really happy we did, because it's a phenomenal app. It's really easy to use, and it speaks to those trade-offs you're asking about. When you go use ChatGPT or Grok, they're amazing tools, phenomenal. They have a lot of great functionality, a lot of functionality we don't have in Maple yet. But the trade-off you're making is that you're sacrificing your privacy to those companies. Everything you type is stored permanently on their servers and then used to train their models in the future.
Now, for some people, and I'm sure you've run across this, so many people you talk to don't see that as a problem. I don't have anything to hide; why should I care if I'm giving up my privacy? But our privacy is really important for maintaining open societies, for maintaining open markets, for maintaining these freedoms that we enjoy. Because when we give up our privacy, we allow authorities, we allow people, to come in and start inserting themselves into the process. They can create roadblocks. They can create censorship. They can shut us down. So we need to maintain that barrier.
[00:23:20] Shawn Yeager:
That brings to mind the phrase, the right to selectively disclose. I don't know if that was Snowden or someone long before him, but that's the phrase that sticks with me: even if you presume you have nothing to hide, it is about maintaining and defending the right to selective disclosure.
[00:23:38] Marks:
Yeah, it's a great quote. So that's really where we're at with AI and with Maple. From a practical standpoint, the more you tell these other services, the more it can be used in the future, which can be convenient, but then you have to realize you don't know who is at that company. OpenAI hires dozens of people, employs so many individuals, and you don't know who is accessing your data. And it might be very personal conversations, things you might share just between you and a family member, you and your significant other, that you are now sharing with random people.
Maple AI gives you that ability back. At a high level, what Maple AI does is create a private key for you in the secure enclave. It encrypts the chat you write on your device and then sends it to our servers. And not only do we have that encryption happening in the enclave, we also have a second layer, which is encryption on the GPU. We have the open source large language model booted up on the GPU, and your request is passed along encrypted to the GPU as well. So nobody is able to see what is being chatted about. The GPU comes up with a response from the LLM and sends it back, encrypted, to your device. The whole pipeline is encrypted, which also means we have no censorship in there. We don't sanitize anything. We just give you a raw open source model and let you talk to it. It's based on Meta's Llama 3.3 at the moment. We want to bring other models online soon; we're working through that. We actually currently use an LLM provider.
But because they use GPUs that are encrypted, we can verify the whole process and know that they are end-to-end encrypted as well. We're also looking at how we spin up our own encrypted models. The only kind of censorship you might get is from the model itself, the data it was trained on, the biases, the weights that might tilt one way or another politically. We do our best to find models that are the least politically motivated possible, and then we don't insert ourselves in the middle of that process.
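Here is a rough sketch of that round trip, assuming a session key already negotiated with an attested enclave at login; the helper names are hypothetical, not Maple's actual client code.

```typescript
// The pipeline in miniature: encrypt on-device, relay ciphertext, decrypt
// only inside the enclave boundary next to the (encrypted) GPU, and re-seal
// the model's reply before it leaves. AES-256-GCM stands in for the real scheme.
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

type Sealed = { nonce: Buffer; ciphertext: Buffer; tag: Buffer };

function seal(key: Buffer, plaintext: string): Sealed {
  const nonce = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, nonce);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { nonce, ciphertext, tag: cipher.getAuthTag() };
}

function open(key: Buffer, sealed: Sealed): string {
  const decipher = createDecipheriv("aes-256-gcm", key, sealed.nonce);
  decipher.setAuthTag(sealed.tag);
  return Buffer.concat([decipher.update(sealed.ciphertext), decipher.final()]).toString("utf8");
}

// Session key negotiated with the attested enclave at login (assumed here).
const sessionKey = randomBytes(32);

// 1. Client encrypts the chat on-device; the operator relays only ciphertext.
const request = seal(sessionKey, "Summarize my private notes about...");

// 2. Inside the enclave: decrypt, run the model, re-seal before leaving.
const prompt = open(sessionKey, request);
const reply = `LLM reply to: ${prompt}`; // stand-in for the Llama 3.3 call
const response = seal(sessionKey, reply);

// 3. Only the user's device ever sees the plaintext reply.
console.log(open(sessionKey, response));
```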
[00:25:58] Shawn Yeager:
And, I mean, when I initially signed up for Maple AI, I think I used GitHub authentication, and it was absolutely seamless. There was nothing about it that felt different, other than understanding a bit more about what was going on behind the scenes. Talk more about the trade-offs; perhaps the trade-off is in the model and what's required. I'll shamelessly say that Grok is my preferred tool at this point; it's just mind-blowing for various reasons. So where is the line, I guess, in terms of what's possible with Open Secret and secure enclaves?
And where is your place in that ecosystem as you see it today?
[00:26:44] Marks:
So I'm definitely not going to be building my own data center and doing all the cool stuff Elon's doing; I think what they're trying to accomplish is top-notch. For us, I don't look at Maple AI as the only AI tool people should have. There are so many tools out there, and it should be one tool in your toolbox. If you're trying to fix something in your house, you go out to your garage, you open up your drawer, you open up your toolbox, and there are 20 or 30 different tools in there; you need a different tool for a different situation. I use Grok all the time. I use ChatGPT.
I use Venice AI. I use a lot of other AI services for different purposes. But I'm so glad I have Maple there as well, because when there are chats that just should be more private, I have that tool to use. And Maple actually has a very long roadmap; we have a long strategy for it. It started as a glint in our eye, a proof of concept, but very quickly we realized there's so much more here. So we are going to continue adding features to Maple. It will become more feature-rich. It will be something that can sit side by side with ChatGPT and some of these other services.
But Maple is also a service for other app developers, so it plays into the whole Open Secret ecosystem. Let's go back to that running app you might be using. You're tracking your runs around your neighborhood, and if that developer is built on Open Secret, they're using it to secure your location data. So now your location data is private. But what if they want to use AI to help you make decisions, like how to build a better running schedule? Maybe you want to run a marathon: okay, based on your running history, let me use AI to build you a plan to get there. The current option is for them to grab an API key from someone like OpenAI.
And now they have to break that privacy wall. They have to take your private, really sensitive information and hand it to OpenAI. That is not a great compromise to make. So Maple is part of the Open Secret ecosystem: developers can have a private AI to chat with, bring it within their app, and stay within that privacy ecosystem. We see Maple as both a really solid consumer app and a business app. We have a lot of business users on Maple already, and we'll continue to iterate there. As that gets stronger, it gets stronger for the developers who are using private AI, and vice versa. When developers ask us for things they want in the private AI space, those will go into Maple AI as an end-user product, and both will get better at the same time.
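To make the developer-facing difference concrete, here is a hedged sketch. The first function is the status quo, an OpenAI-style API call that ships plaintext out of your privacy boundary; the second assumes a hypothetical enclave-session SDK, since the names there are illustrative rather than Open Secret's published interface.

```typescript
// Status quo: the running app's sensitive history leaves your privacy wall
// and lands, in plaintext, on a third party's servers.
async function planWithThirdParty(runHistory: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [{ role: "user", content: `Build a marathon plan from: ${runHistory}` }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// Enclave alternative: the same request, but end-to-end encrypted to an
// attested enclave, so the provider never sees the run history.
// `EnclaveSession` and `chat` are hypothetical names for illustration.
interface EnclaveSession {
  chat(prompt: string): Promise<string>;
}

async function planPrivately(runHistory: string, session: EnclaveSession): Promise<string> {
  return session.chat(`Build a marathon plan from: ${runHistory}`);
}
```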
[00:29:28] Shawn Yeager:
Terrific. And that makes a ton of sense. In fact, you touched on one of the questions I wanted to ask. It seems to me, and I'm no expert here, that in enterprise use, certainly OpenAI and others, with their team licenses, every time I sign into one of these products they want to push me to upgrade to a team product or team license. That seems particularly problematic in terms of RAG or otherwise training these models on intellectual property that belongs to the company: sensitive documents, whatever the case may be. Talk to me a bit about what that looks like. Are you bullish on enterprise uptake, or do enterprises suffer some of the same challenges collectively that individuals do, which is, "it's not a problem, we'll just give it to OpenAI"?
[00:30:24] Marks:
If you go look around on Reddit and some of the forums, you'll find countless people saying they have no problem sharing their company information with OpenAI, which is unfortunate. That's just the nature of it; some people just think it's not a big deal. I would question whether those were officers of the company hanging out on Reddit saying those things. Yeah, probably not. Probably someone who doesn't get perp-walked out the front door if they break a reg. That's right. And I can understand where they're coming from. Maybe you're running some small business in a town, and you think the threat of sharing information with ChatGPT is that ChatGPT is going to come take over your business. That's the wrong threat vector to be thinking about. Really, what's happening is that you are training ChatGPT's models to benefit your competitor
in the same town as you, because they're going to go to ChatGPT and start asking questions: hey, how do I spin up this thing? How do I run a marketing campaign in this market? And if you were using it six months or a year prior, your data is now part of the brain that's going to make recommendations to your competitors. You're making the system smarter to benefit not only you but those around you. And as a business person, you need to make that call: do you want to improve it for your competitors or not? Whereas with something like Maple, you can bring your chats in there. We're adding document upload soon, so you'll be able to upload your documents and do your own RAG-type stuff.
You'll be able to go over strategy, marketing plans, customer service ideas, documentation. There will be all sorts of things you can do within Maple that won't be shared with anybody. They stay local to you and private to you. And that's really a big competitive advantage for business users: they can have this AI that they bring into their strategy room. You bring the smartest people into your room, you close the door, and you have a conversation. Well, now you can bring Maple in there too. And you don't have to trust us; you can verify cryptographically that your information is not being spilled anywhere.
[00:32:37] Shawn Yeager:
And is there, I think I know the answer to this, but I'll ask it: are there clear trade-offs? What you just described sounds like I can have my cake and eat it too. I can benefit from models trained on this massive corpus of data and information across the internet; Grok presumably excels because it has real-time access to X. So I can take advantage of that baseline and augment it with my own documents, my own data. Are there obvious places, or do you foresee places, where it breaks, and where the YOLO approach of an OpenAI or whoever outpaces those who've elected to apply your technology? Are there trade-offs that are apparent?
[00:33:31] Marks:
The AI race right now is very exhausting. If you're online at all, every week there are new advances happening. So I definitely feel the anxiety and the struggle: if we're going to have a product like Maple AI, we have to be able to keep up, in some regard, with all the advances Grok and OpenAI are making; DeepSeek, Mistral, everybody's throwing out new stuff constantly and doing great work. What we really need to do is provide a base level of features for our users and do it in a way that is private.
And then we have the benefit of also building Open Secret, which has Maple built into it. We would be extremely happy if a developer came along and built a Maple competitor on top of Open Secret. I'd have no problem with that, because the technology and the tools are all there for them to do it. If they want to go after a market, say image generation, for example: we don't do image gen inside Maple. There are reasons we haven't done that yet, and we might in the future, but if somebody wants to build that on top of Open Secret, by all means have at it; make it a phenomenal business, scale it to millions of users. That'd be great. Really, we want to use Maple ourselves. I use it daily. I have a personal assistant prompt I use to prioritize my day; I use it for strategy, for marketing, all sorts of things. So I want to keep using it, and we will keep Maple usable for business use cases as long as there isn't another competitor that is as private. If somebody builds something as private as Maple and more full-featured, then at that point we start to look at the cost-benefit of maintaining Maple or winding it down. But I don't think the private AI part of Open Secret will ever go away. That's going to be a mainstay. There will always be some really strong private AI functionality in Open Secret, and there will always be end-user apps on Open Secret that provide private AI. I don't know if that answers your question, but that's the thought.
[00:35:54] Shawn Yeager:
It does, absolutely. And it seems to me, I have these conversations often, particularly with friends and colleagues who are developers, who push back; they've heard the examples of half-baked code that Cursor or something, not to pick on anyone, has spit out, and they've now got to spend 10x the time trying to debug it or get it back up and running. But I think it's inevitable. In my mind, LLMs, AI broadly, and agents are now a foundational element of any sort of system. So I absolutely hear you there. What comes next? If you're willing, give us a glimpse of what's next on the roadmap. Or, if you had your way, who would come knocking on Open Secret's door, and what would they be looking to build?
[00:36:39] Marks:
For me, the lowest-hanging-fruit app ideas out there are note-taking apps and journal apps: people who want to write down their daily thoughts or their internal struggles. These are things that are very personal and private. I think those are going to be the first apps that could be written.
[00:36:59] Shawn Yeager:
And then you plug in, yeah, an Obsidian.
[00:37:03] Marks:
One of the apps we talked to was...
[00:37:07] Shawn Yeager:
Day One. Day One, right? I used to be a heavy user, and when they got dicey on their data policy, I stopped.
[00:37:15] Marks:
Yeah, right? And they did build private, end-to-end encryption into it. It took them a long time. I was good friends with those guys; I was actually at Apple's developer conference when they won their first Apple Design Award. I got to hold it and take a picture; it was fun. So they're great guys. But we chatted with their main developer, who's not with them anymore, and he said if Open Secret had been around when they were building it, it would have taken a matter of weeks, maybe a couple of months total, to implement. Instead it took them a year and a half, because they had to build it for iOS, for Android, and for web, and they were figuring it out as they went. Whereas Open Secret is build it once, run it everywhere, and you get really strong encryption.
So I think those are the first apps, and obviously AI is a big part of it, so there's going to be a proliferation of AI apps. And then you mentioned AI agents. That's really where we're headed. For those listening who've heard of AI agents but are still confused about what the heck they are: they're effectively AI bots, specialized AIs that know how to do a task. If you're hungry for dinner, you'd go into your main AI, call it ChatGPT or Maple, and say, I'm hungry, get me some food.
ChatGPT doesn't know how to fulfill your wishes there. All it can do is say, here are the steps on how to figure out what you want to eat, and here are the steps on how to order food on DoorDash. That's where it stops. But with agents, ChatGPT or Maple or some other AI will be able to say, okay, here's what you want to eat; now let me spin up an agent that knows how to find restaurants in your area, and then another agent that knows how to order food on DoorDash. It will go spin those up, but those agents are not free. They cost compute credits, power, electricity, all that stuff. So we're going to need a way to pay all of them, and that's where ecash comes in. You have Lightning, you have Bitcoin, you have stablecoins, you have credit cards all trying to do this. I don't think credit cards are going to be able to shoehorn themselves into this new paradigm of AIs paying each other, because they charge three and a quarter points per transaction plus 30¢, and we're talking about fractions of a penny per transaction, maybe a few cents. Really, a few sats is the term we should be throwing around. So what we need is some kind of wallet attached to your AI that can pay other AI agents, and you just authorize it. You say, okay, here's your budget, you've got $50 in here; use it as you need to, and I'll replenish you in the future.
And I'm hopeful and bullish that ecash is going to be that solution. I think it was built for this, and it satisfies a real need.
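A minimal sketch of what that authorization might look like, with the ecash or Lightning plumbing stubbed out: a hard budget for autonomous spends, plus an approval threshold that kicks larger payments back to the human, previewing the guardrails discussed a little further on.

```typescript
// Hypothetical agent wallet: small spends happen autonomously within a
// budget; anything over the threshold pings the human for approval.
// The actual ecash token transfer / Lightning payment is stubbed out.
interface PaymentRequest {
  payee: string;
  amountSats: number;
  memo: string;
}

class AgentWallet {
  private spentSats = 0;

  constructor(
    private budgetSats: number, // e.g. the "$50" the user loaded
    private approvalThresholdSats: number, // above this, ask the human
    private askHuman: (req: PaymentRequest) => Promise<boolean>,
  ) {}

  async pay(req: PaymentRequest): Promise<boolean> {
    if (this.spentSats + req.amountSats > this.budgetSats) {
      return false; // budget exhausted; the human must replenish
    }
    if (req.amountSats > this.approvalThresholdSats && !(await this.askHuman(req))) {
      return false; // human declined
    }
    // A real wallet would hand over ecash tokens or pay an invoice here.
    this.spentSats += req.amountSats;
    return true;
  }
}

// The restaurant-finder agent costs a few sats; no approval needed.
const wallet = new AgentWallet(100_000, 5_000, async (req) => {
  console.log(`Approve ${req.amountSats} sats to ${req.payee}? (${req.memo})`);
  return true; // stand-in for a push-notification approval
});
wallet.pay({ payee: "restaurant-finder-agent", amountSats: 21, memo: "search fee" })
  .then((ok) => console.log("paid:", ok));
```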
[00:40:00] Shawn Yeager:
Let me pause you there, if I may, Marks. On the topic of trust, and I'm dating myself, I think back to the nineties and the beginning of my career at Microsoft, when we were promising intelligent agents. You stick around long enough, these things always come full circle. But now, as then, I think it comes down to: what will it take for me to trust this thing enough to let it loose? I've got a computer science degree, I've been in tech my entire career, and I pretty deeply understand the technology, and I still don't know what it will take. What do I need in order to qualify and quantify giving this thing a wallet and a budget? With Nostr Wallet Connect I can set budgets; all these things are great, and I understand them. But that's the crux of it, I suppose: how far away are we, do you think, from this being in the wild?
And what is your take on how secure enclaves, and Open Secret specifically, address that gap, that trust gap of, okay, yes, I'm going to give it a leash and let it run?
[00:41:13] Marks:
Yeah. In my mind, as I'm listening to you, I think you're describing two different definitions of trust. The first is, do you trust it to be effective? Do you trust these AIs, when you let them go do the thing you want, to actually do it? If you want food for dinner, do you trust it's actually going to bring you food you like, or is it going to bring you some styrofoam you don't want to eat at all? That's something we can just play out through user experience and trial and error until we trust it's going to happen. Then there's the deeper version of trust: do I actually think it's going to be benevolent and act on my behalf, and that it won't have bugs that suddenly spend my entire wallet in one swoop?
[00:41:58] Shawn Yeager:
And I think hallucination, in the age of LLMs, has, for me at least, reinjected that concern. You see how far off the track these things can go.
[00:42:10] Marks:
Yeah, definitely. Hallucination is definitely a big part of chat. But I think agents have an opportunity to be more transactional, where they can write unit tests and prove things: if the input is this, make sure the output is this, and don't stray from that. So I think we might be able to have more provable trust in that kind of design. You mentioned working at Microsoft; I used to work at Apple, and in these big corporations you have thresholds of payments that can be approved. All the managers above you each have their own budget level for discretionary spending: one can approve up to $250, the next person up to $250,000, or something like that.
I think we probably end up in some kind of space like that, and you're seeing it with Nostr Wallet Connect, being able to set a budget for different apps. Maybe you say, I'm okay with you making decisions up to this point on your own; if it needs to go above that, come back to me for approval, whether that's a push notification on my phone. I mean, how dope would that be: you're sitting in a movie or at a ballgame, and your phone says, hey, your AI agent wants to do this really important thing for you, do you approve? And you say, oh yeah, for sure, go for it. Maybe it's negotiating your car insurance for you: hey, I just found I can save you $500 a year by switching; do you authorize me to do it? Yes, I would say yes to that, maybe after a little quick report right there on the screen. So I think we need to figure out how to set up these guardrails so that we feel comfortable at that level of trust. And then with Open Secret, what we're doing is putting the open source code out there.
This is the code that runs inside the enclaves, and we have reproducible builds. So any user who wants to can go download the code, run the build, and get a checksum, a number that is the fingerprint of the build. And when you log into Maple, you actually see the fingerprint the server is reporting; it's called attestation. You as a user can compare those two fingerprints and verify they match. The software does that for you, it gives you a green verified badge, a little check mark, but users are able to do it themselves. And to me, that's kind of the final piece to the trust you're asking about: we need to see the code, we need to know what's running on these servers inside the secure enclaves. Companies like Apple, with their Private Cloud Compute and Apple Intelligence, are using secure enclaves, but Apple doesn't want to open source all of that. So they go to third-party auditors, and we have to trust that the auditors are looking at the code and verifying it. It's the same thing with Signal. Signal has third-party auditors that come look at their code. So if you're going to use Signal to chat, you are extending some trust, because you download the app from the App Store.
Even though Signal publishes their code open source, you can't verify that the build you're downloading is the exact same code that's out in the open. You can hope, but you're trusting the third-party auditors when they say, yeah, we checked out this build, it is the same code. So there's a level of trust there. With Open Secret, we're trying to bring the code and the user as close together as possible. That's really what we're going after. Obviously, companies are going to do it different ways, and you have to decide for yourself, as a user: what is your risk model? How much can you trust someone, and how much do you need to have more eyes on the code yourself?
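For the curious, that verification loop can be pictured like this: a sketch that assumes a JSON attestation endpoint and field name, both hypothetical. A real client must also verify the attestation document's signature against the hardware vendor's root of trust, which this sketch skips.

```typescript
// Reproducible-build verification in miniature: hash the artifact you built
// from the open source code, then compare it to the measurement the running
// service attests to. The green badge is just this comparison, automated.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Step 1: fingerprint the enclave artifact you reproduced locally.
function localFingerprint(artifactPath: string): string {
  return createHash("sha256").update(readFileSync(artifactPath)).digest("hex");
}

// Step 2: ask the running service what code it claims to be executing.
// URL and field name are hypothetical; Nitro-style attestation documents
// are signed CBOR, and checking that signature is omitted here.
async function serverFingerprint(attestationUrl: string): Promise<string> {
  const doc = await (await fetch(attestationUrl)).json();
  return doc.enclaveMeasurement as string;
}

// Step 3: connect only if the fingerprints match.
async function verify(artifactPath: string, attestationUrl: string): Promise<void> {
  const ok = localFingerprint(artifactPath) === (await serverFingerprint(attestationUrl));
  console.log(ok ? "verified: server runs the code you built" : "MISMATCH: do not connect");
}

verify("./enclave-build.eif", "https://example.com/attestation").catch(console.error);
```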
[00:45:53] Shawn Yeager:
That brings to mind, I wonder, Marks, whether you and the team didn't have a lot of interesting conversations about business model and monetization versus don't-trust-verify. Was that non-trivial, or was it clear from the beginning that you were going to push it as close to the user as possible and have as much visibility and transparency as you've laid out? Because that does seem to me to be at least the reason, if not the excuse, that a lot of established companies hold up.
[00:46:22] Marks:
Mhmm. This was definitely a big part of the conversation. We very strongly feel we need to have our code open source. However, we're trying to build a business, especially a SaaS business, and there's a threat there, where your software can be copied and someone can spin up a competitor. So we looked at the licenses out there, and this was controversial within our company: which license do we go with? We eventually went with AGPL for the secure enclave code. Maple itself, the client, is MIT; people are welcome to fork that and do their own commercial products if they want, and they don't have to contribute back. But we picked AGPL for the enclave code for a few reasons.
One of them is that we're trying to build a business, so if people are going to copy our code and improve it, we would love for those improvements to come back to us, and for there not to be a straight competitor right off the bat while we're so small. Another is that we feel very strongly that secure enclaves, to be effective, need to have their code open source and reproducible. So if someone is going to take the Open Secret enclave code, we want them to publish it open source as well, for the protection of the users. If our code is going to be securing people's stuff, even on another person's server, we want it out in the open so users can be protected.
So that's the approach we took. It was not an easy decision, because we wanted to provide the strongest privacy possible, but that's eventually where we landed.
[00:48:05] Shawn Yeager:
And I think that takes us to the last question I'd love to raise with you, Marks. Speaking to business leaders, be they heads of product, heads of engineering, or CXOs: in a future where AI, certainly LLMs, and other compute-intensive, increasingly data-hungry applications become ubiquitous, what changes do those incumbent players need to start deliberating and deciding on in order to keep pace, and presumably to make the decisions that I think you and I would agree are best for the user, and provide the trust that users will demand?
[00:48:55] Marks:
Trust and privacy have been a nice-to-have for too long. For decades they've been a nice-to-have, because in order to build effective cloud environments and monetize a SaaS product, you almost needed to throw privacy out the window to make it work. The technology has finally caught up, with things like secure enclaves, and with devices having so much power on them: you can run an LLM on a mobile phone now pretty effectively. It won't get you everything, the models we run in the cloud are still way more powerful, but sometimes you can get enough of what you need on the device. So the computing has caught up, and now we're going to see users caring more about privacy.
Look at the political landscape and what has happened in the last few months: more people are talking about privacy and trust and what those mean for them. And not just in America; all over the world. Europe is a big example, where privacy is really under threat, attacked continuously. So if you're at a company, if you have an app that's being used by a lot of people, you need to be watching for a competitor to come out with the trust and privacy angle you don't have, and asking how you can adapt. With Open Secret, we don't require people to throw away all their code; we can plug into an existing tech stack.
That was really important to us, that we don't require people to start from scratch. You can come talk to us, and we can help you start to secure just pieces of your data. But you don't have to; you can download our code and do your own secure enclaves. I think every company out there with a successful app should start looking at secure enclaves and figuring out what user data actually needs to be shared within their database and what does not. If it does not need to be shared for mission-critical things to function, you should be locking it down, privatizing it, securing it. It's not only good for your users; it's good for your business's liability, and it's a competitive advantage against the other apps that are going to come along trying to be more secure and more private.
[00:51:10] Shawn Yeager:
I wonder, Marks, if you have a perspective on what this does in terms of regulatory advantage, or simply to minimize the attack surface, if you will. Companies, and we see this with Apple, with Google, with all the large players, are running a gauntlet of overlapping and competing regulatory regimes. Do you have a sense of what adopting secure enclaves, adopting Open Secret, would do to one's exposure to these messy, sometimes overlapping and competing privacy and regulatory regimes?
[00:51:49] Marks:
Yeah, that is a heavy question right there. First off, my portion of this podcast is going over Obscura VPN, so I'm going to dox myself a little bit there. It's a great product; I love it. And second, you're not a lawyer. Yeah, exactly, I'm not a lawyer, so there's that too. I think secure enclaves help you when it comes to regulation, because you can effectively be hands-off. In the Bitcoin world and the digital currency world, we talk a lot about self-custody: you have your own keys, you secure your own coins. And if the users have their own keys, the business doesn't have custody of their coins, of their money, so all sorts of regulations and liabilities don't apply to it. With Open Secret, we are bringing self-custody to your data.
All the same things apply: the user is in control of their data, and you as a provider don't actually have access. You have no control over what data they generate, what they interact with, or what they do with it. So all sorts of liability no longer applies to you, and regulations don't apply to you. Now, on the flip side, I do wonder if companies that are earliest to embrace secure enclaves might become a target for coercion, like Apple experienced with the enclaves on the phone. Maybe I do get a knock on my door that says, there is this one specific user;
we need you to push an update to this product, knowing that it will actually break the product. If we push out a nefarious build that doesn't match the open source version of Maple, your client will not connect; it will immediately sever the connection and do nothing, so you won't be compromised. In theory, we could push a backdoor for a government to get at one specific user, if that user came and logged in, but again, it requires that user to actually log in against that code. It's a very detailed question and a very detailed answer, and I don't want to dig into all the technicalities.
But we have set it up to make that situation nearly impossible. I do wonder about the threat that somebody could come put pressure on us, and that's why we build all of this cryptography and encryption in, so that scenario cannot play out. I don't want to say impossible, but it's as close to impossible as we can make it, so that we can protect ourselves from our users, and our users can be protected from us.
[00:54:21] Shawn Yeager:
It's an excellent point. I came at this from the standpoint, obviously, of what it would do to demonstrably prove, hey, I can't help you. But then little, if anything, stops these government agencies from believing you can build a backdoor for just them. So it's a great point. Well, Marks, I've been looking forward to this conversation and have been delighted; I really appreciate it. For those who want to follow you, your work, Open Secret, and Maple AI, and I'll certainly get this into the show notes, where should they look? Where should they follow you?
[00:55:01] Marks:
Great. Likewise, I've really enjoyed this conversation; I was looking forward to it. I'm on Twitter and on Nostr. My username on X is marks underscore FTW, for the win, and on Nostr I'm just marks@primal.net; you can follow me there. We have Maple in all the places as well: on X it's trymaple.ai, and on Nostr it's maple.ai@primal.net. Open Secret is there too; Open Secret Cloud is where you'll find our handle, and our website is opensecret.cloud. I recommend any developer who wants to try out secure enclaves or Open Secret first go to trymaple.ai and see what it's like for an end user. That is the light-bulb moment, because when you think about building encryption or doing any of these private-key things, it's daunting; you assume it's going to be convoluted and difficult to use. So just go to trymaple.ai and log in, and you won't even notice you're using end-to-end encryption. It's that easy. That's where I'd point people: create a free account, give it a try, and you'll get a taste of what it's like to have an app built on Open Secret.
[00:56:11] Shawn Yeager:
Terrific. Yep, I use it: great product, smooth as can be, and I'm really excited to see iterations of it and what others build. Thanks, Marks. Appreciate the time. Talk to you soon. Okay, we'll talk to you. Bye.
Bitcoin Takeover at South by Southwest
Mutiny Wallet and Open Secret Transition
Understanding Secure Enclaves
Privacy and Trust in Technology
The Future of Data Privacy
Maple AI and Private AI Interactions
AI Agents and the Role of eCash
Business Implications of Secure Enclaves