16 April 2025
S01E01 John Robb – AI, decentralized networks, and political swarms
What happens when trust, the glue holding society together, starts to crumble under the weight of AI, decentralized networks, and political swarms? In the premiere episode of Trust Revolution, host Shawn Yeager sits down with John Robb, a former Special Ops officer turned tech visionary, to explore this question. With a career spanning the battlefield and Silicon Valley, Robb offers sharp insights into how "red" and "blue" network swarms are reshaping U.S. politics, how AI-driven augmented reality might disconnect us from reality, and how social AIs could spark the next economic revolution. From the fertility crisis to the future of decentralized systems, this conversation unpacks the challenges and opportunities in a world where trust is increasingly fragile.
John Robb is a US Air Force Academy graduate, a former Special Ops officer, and a tech entrepreneur. He founded Gomez and served as CEO of Userland Software, a key player in the development of RSS technology. Robb is also the author of Brave New War and a leading strategist focused on decentralized systems and artificial intelligence.
For more from John Robb, check out these links:
- John Robb’s Substack – His current platform for insights on technology, trust, and society.
- "Brave New War" on Amazon – Purchase or learn more about his book on modern warfare.
- Userland Software and RSS – Background on his role in pioneering RSS.
- John Robb’s Wikipedia Page – More details on his career and contributions.
Subscribe:
https://podcast.trustrevolution.co
Music in this episode by More Ghost Than Man.
[00:00:04] Shawn Yeager:
John Robb, thanks for joining. Oh, my pleasure. Thanks, John. I appreciate you taking the time. As I mentioned before we got started, not to be too gratuitous, but I've followed your work since I read Brave New War in, I was trying to find my copy, 2009, 2010. Fascinating read. And in my view, it was one of the first bold reframings of open source: what does that do to the asymmetry of power, and what does that do to enable actors who may turn some of these open technologies against modern society and culture? With your recent writing, I have really enjoyed network swarms and the ideas that you're putting forward, and I certainly wanna give you an opportunity to frame that. I think what is interesting for the audience, for the viewers, and they'll see this in the show notes, is, correct me if I'm wrong, John, but you got your start at the US Air Force Academy.
You went in then to US Special Ops Command, then decided Yale was the right follow-up after special operations, which I think is fascinating. And then I presume you took a lot of that and synthesized it into your role as an analyst at Forrester, where you were tracking some of the very earliest Internet technologies, and then founded Gomez and had a very significant exit. Was it just shy of $300 million to Compuware, which of course at that time was a massive player? And the Easter egg for me, I have to say, was discovering that you were CEO of Userland Software, which I just think was one of the most delightful companies. And for those who may not know Userland, they certainly know RSS, if they're techie, as I think a lot of listeners will be.
[00:02:02] John Robb:
And so, a brief aside, tell me about Userland Software. How was that? Yeah. So I was at Forrester in '95, and I got pulled into the Internet. They didn't have anyone covering it at the time, and I was a new guy. I'd just been doing tier one special ops and went to Yale to kinda fill in the gaps in my business knowledge. I was showing all these new Internet technologies to everybody in the company, and they said, okay, why don't you start that service? And it went to, like, 25% of the revenue after I launched it and started writing these big picture reports. I had kind of a knack for it. And one of the reports I wrote in '96 was called Personal Broadcast Networks.
It was this idea that we would be posting things, that people would be subscribing to them, and that we could create all these personal publish-subscribe networks that would constantly keep us up to date on all these other individuals. And everyone at Microsoft and Netscape and others came to me and said, how do we build this? And I'd go, I don't know yet. That was, in fact, Microsoft's browser team at that time, trying to get into the, you know, the right quadrant. So please go ahead. Yeah. Mike Homer, before he died, was over at Netscape; he was head of marketing there. And he came to see me, kinda like, how do we do this? But I went and did Gomez, and we did that exit; I think it sold for $295 million. And then in 2001, you know, I had something else that I wanted to do, and I wanted to see if I could make this idea real.
It was basically social networking. And in 2001, I saw a company that was doing a lot of that, called Userland. Dave Winer was the chief programmer there, and he needed somebody to run his company. I came in and ran it, and it was, like, a three, four person affair. Robert Scoble was our marketing for a while. Punching way above its weight, though. Yeah. And we built, basically, social networking as we know it today. We published RSS as an open source standard, and we created a tool that you could subscribe to RSS feeds with. The feed on our tool, we called it Radio; it was a desktop tool. It had the same kind of constraint on characters that you get in Twitter, so it looked exactly like a Twitter feed. And for publishing, you could take it from that Twitter kind of interface and then publish it into a weblog. We established a lot of the early standards for how weblogs are published: reverse chronological order, you know, with time stamps.
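For readers who never saw Radio, here is a minimal sketch of the publish-subscribe format at the center of this story: an RSS 2.0 feed, built here with Python's standard library. The feed title, link, and post are invented for illustration.

```python
# Minimal sketch of an RSS 2.0 feed of the kind Radio published and
# subscribed to. Standard library only; all names and URLs are invented.
import xml.etree.ElementTree as ET
from email.utils import formatdate

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "My Weblog"             # hypothetical blog
ET.SubElement(channel, "link").text = "https://example.com/"
ET.SubElement(channel, "description").text = "Short posts, newest first."

item = ET.SubElement(channel, "item")                          # one post in the feed
ET.SubElement(item, "title").text = "Hello, world"
ET.SubElement(item, "link").text = "https://example.com/hello"
ET.SubElement(item, "pubDate").text = formatdate()             # RFC 822 timestamp
ET.SubElement(item, "description").text = "A short, tweet-length update."

print(ET.tostring(rss, encoding="unicode"))
```

A reader polls feed URLs like this one, collects new items from everyone it subscribes to, and displays them newest first; that polling loop is the personal publish-subscribe network he described at Forrester.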
You could take what other people you subscribed to posted and then post more on it. And it was actually blue; it looked exactly like Facebook. That was the default color. I mean, it's exactly the Facebook color. So we did that in 2001, and it was too early. I got experience trying to get people to see an emerging technology from the ground floor. And it was excruciating. I told people how to use it and how it could be applied. I spent a couple hours with Ray Ozzie before he took over at Microsoft for a while. Wow. And he was trying to figure out if this was something they wanted to employ. It could have put them in an amazing position if he had taken that in, but I guess I wasn't convincing enough. And we got the New York Times to sign on to do an RSS feed as one of our exclusive feeds for the Radio tool. And once they were on, all these other places came on, NPR and everyone else. It just wasn't enough.
We just ran out of steam, and I left before Facebook and Twitter took off. So, went on to do other things. It was very excruciating to go through. It's hard. People have cognitive filters; they just couldn't see the value and the dynamism. And I saw it clear as day. I just couldn't communicate it well enough, I guess. But anyways, as they say, my experience is that being early is
[00:06:15] Shawn Yeager:
an even greater, or as great, a source of failure as being wrong in startups. And so, it is tremendous. I mean, again, as I said, it delights me to find that you were running Userland. My Mac IIsi, I think, is what I had, or a Quadra 950; the details are not important, but I distinctly remember that application, that software. It was a cool experience. Yeah. And, I mean, in doing that, and to connect the dots, I raise that because that was an extremely early experiment in, I don't wanna strain the word, decentralized, but this was pushing a lot of this to the edge, to the end user.
And so we fast forward now, twenty-five-plus years later. I'd love to get into, John, your framing of trust. And I'll sort of say, because you have worked in special forces, because you have built and sold a business, you've got two very different perspectives. How do you view trust? I know you've thought a lot about this. Mhmm. I mean, trust
[00:07:29] John Robb:
is a minimum requirement for cohesion. Okay? And cohesion is a minimum requirement for high quality decision making. Cohesion allows you to view the other participants in your decision making process as friendly to that process, that they're not trying to sabotage it. It also works at a national level, with political discussion. So if you see somebody with a different opinion, that's an opponent; you can argue with them, but you fundamentally trust them in the sense that they're working for the good of the nation and just disagree. But in the current environment, we see many of the people on the other side of the political spectrum as enemies.
They're outside our tribe. And that tribalism... you know, I dug in to find the origins of trust, and it's that we were wired at a biological and neural level for tribal life. Life inside a tribe was a high trust environment. You knew everybody there. You were reliant on everybody there to get through the next day, the next year. Their success made you more successful. It was largely a gifting culture, a culture where you gave things, and the better the gift, the more status you had. And then we broke with the tribal culture and started to scale our society, adding more people. That tribal connection, knowing the other people, wasn't sufficient then. So we came up with kind of hacks, or workarounds, trying to make that trust possible. We came up with a legal system. We came up with bureaucracy, bureaucracy being a formalized way of getting things done that operated on a civil code, with a certain level of professionalism associated with it, so you could trust that its output would be high quality. And then we replicated tribalism, in a sense, by coming up with nationalism.
And nationalism gave us that cohesion at the national or nation state level that allowed us to operate as a cohesive unit. And marketplaces, of course, have to be fair. You have to make sure that they're not being corrupted or gamed, and that need for making the process transparent is behind many of the rules that we see in terms of financial disclosures and things like that, and how contracts work. So all of these things that we rely on now are workarounds. Right?
So that's kind of my thinking on this: can we use current technologies, and ones currently being developed, to find better ways to do that? Okay?
[00:10:28] Shawn Yeager:
Improve the quality of trust. That's very, very helpful. And I think it is interesting to look at all of our technologies, as enamored as some, myself included, may get with them, as attempts to scale trust, attempts to get beyond Dunbar's number, attempts to replicate the tribe, as you say. Well, on that note, your work has for a long time highlighted how centralized systems are prone to collapse under pressure. And you've been writing quite a bit, and I know engaged in many energetic discussions on X, on Twitter, about AI. How is AI, in your view, making these weak spots harder to ignore? And what's the clearest sign that we're trusting these centralized systems too much?
[00:11:19] John Robb:
Yeah. I mean, a centralized system, like the big tech companies are currently running with social networking and other things. Social networking now is upstream of the legacy news system and how we actually manage and process information. The problem with that is that it can be disrupted, and it's easily disrupted from underneath, but another failure mode is that it can be locked down. Right? One of my big worries back in 2017 was that we were headed towards a potential lockdown of the information ecosystem. I call our political factions the blue network and the red network. The tendency of the blue network after it, you know, kicked Trump out of office in 2020 was to lock down these systems to try to minimize the ability of the opposition to say anything that wasn't approved. And they got support across the industry.
They categorized all violations of these rules as kind of evil, things that cause bans and stuff like that. I'm not trying to paint them as bad guys; they saw it as a need for stability, a need to eliminate misinformation and disinformation. And the rules were getting applied in more and more instances, both at the technology level with the big companies as well as at the governmental level and in different organizations. I'd been writing about AI during the same period, since 2017, about how it would actually work when employed. So I was ahead of the game on this, and I saw AI just about to hit.
And that was my big fear: that AI would be employed as a means of locking down the entire system, getting to every conversation, managing those conversations, and then controlling them and redirecting them. Not just simple censorship, but even more. And once that happened, billions of people could be managed in real time. All the topics that were available to think about and talk about would shrink down to a narrow orthodoxy, a narrow group of approved topics. The Overton window. Yeah. And then, well, not just that: everything that could potentially challenge stability would be wiped out, and that means all innovation dies. Your ability to actually push back against the establishment, or how things were operating, dies. Your ability to adapt to changing conditions and everything else would die.
From the perspective of somebody who studied totalitarian states, it was like these corporations were building essentially a totalitarian state in a box, because all you need to do is turn it on, and they could do it cheaply. They didn't need massive bureaucracies of people like the Stasi had, going through everybody's emails by hand and that kind of thing. It could all be automated. And that ended with Musk taking over Twitter, unwinding it, and providing an alternative information conduit that broke open the system. And in your view, is it that simple?
[00:15:00] Shawn Yeager:
Was his buying Twitter and taking it over that momentous?
[00:15:05] John Robb:
One hundred percent. Trump never would have won. I mean, he didn't win in 2020, not because of election fraud or anything; it's because social networking locked him down. His ability to route around the media went to zero. It was shut down on Facebook, shut down on Twitter, shut down on all the other places that you would normally get information. And his supporters were sent to the boonies. We would have seen a repeat of that in 2024 if not for Musk's acquisition of the company. I think from reading his interviews that he was motivated by much the same thing I had seen; he saw it too.
And he was worried. The trigger for him, and the trigger for me, that this was potentially very dangerous was the response to Ukraine. We'd been playing back and forth with Russia, I think, with the mistaken policy of NATO expansion, trying to turn a newly democratic country into an enemy. And the guy who came up with containment, George Kennan, said it was a bad idea. But nonetheless, it was a back and forth, and Russia was pushing back against it as we pushed up against their border. There'd been other conflicts in Ukraine and in other places, and the network turned this into a new cold war and pushed us towards a nuclear confrontation where one didn't, you know, previously exist.
I mean, you could travel to Moscow and take a vacation a year prior to that war, and all of a sudden they were being disconnected by millions of people across government. People inside government, inside corporations, were taking action on their own to push us towards cold war, towards confrontation.
[00:16:54] Shawn Yeager:
Yeah. It fits. And this is preemptive overcompliance. I mean, I won't name a particular company, but I was with one that aggressively overcomplied, in my view. And so, to your point... Yeah. Yeah. They reframed the whole conflict from,
[00:17:10] John Robb:
you know, a regional war, a back and forth that we would normally negotiate our way out of and deal with, into... they just escalated it. Like this was a new Hitler emerging, and anyone who said anything else would be swarmed and stomped on. That swarming behavior, actually deciding, you know, whether we're at war or peace, was kind of scary to me. And it was unreasonable. The way the swarm operated was that it saw the enemy as unconstrained evil. And Putin was unconstrained evil because of his support for Trump, you know, which is the seed of it, interfering in our elections; he instantly became Hitler, and therefore we had to fight him. It was a scary moment. There was no nuance to the whole position of the swarm. There was no negotiation.
It was only total victory, collapse of Russia, that kind of thing. I don't wanna go off on the Ukraine thing, but the swarming behavior was scary to me. And we've seen swarms with George Floyd and other places; they can't be reasoned with. It distorts the truth, and
[00:18:33] Shawn Yeager:
it doesn't stop when it should. So I think that's what's so powerful about that word and what it conjures, as compared, say, to a smart mob, a term used, I don't know, maybe fifteen, twenty years ago: the swarm, as you said, doesn't stop until it has complete victory. And I'm curious, John. There are no end of conspiracy theories, but if I take a naive approach and look at incentives, what incentivized the emergence of the red and blue swarms and the all-consuming total victory? Are these emergent behaviors, or was, and is, there an incentive that drives this? Well,
[00:19:25] John Robb:
I've been tracking networked organizations and their emergence, and I saw these networked organizations as potentially another layer of societal decision making. So we had tribal, which is nationalism in our current context. We had bureaucratic, which was this kind of professional system for mobilizing resources, analyzing information, and managing people at scale. And then you had markets for discovering information and allocating resources based on need. We combined them in our system of governance by using markets for elections to elect the leader of the bureaucracy, and we used nationalism as a way to motivate people to point in the right direction, orient themselves towards advancing the good of the nation, and then cooperate and provide cohesion in our decision making as a whole. So we combined those into a useful whole. All of those developed with the advent of the printing press, and it took five hundred years to roll out. It was a bloody, horrible process of creating these organizations, these means of deciding what to do next, and we arrived at this. And now here comes something new: this networked layer, this networking layer, for decision making.
We saw it in the surge. We saw it in protests, like Egypt and other places, Tunisia. And now we see it emerge in politics. In terms of the red and blue, the red network that supported Trump, that got him into office the first time in 2016, came out of the blue. Okay? And it came out of the blue because he was challenging the globalist orientation of the current administration and establishment. That globalist orientation is, like, we made all our decisions as a country pointing towards globalism, and we minimized nationalism and minimized the kind of system we were running during the Cold War, where we had to have a strong nation state that was economically independent, had an eye on broad prosperity for everybody, and was focused on national defense rather than defending the world. We shifted at the end of the Cold War towards a globalist orientation, and all the policies went with it. Increasingly, the world-is-flat people and everyone else started thinking we didn't need any trade barriers or any kind of economic independence. We didn't need to worry about prosperity. We could open up every American to competition with the rest of the world, because that'll make them stronger. And we didn't deliver on prosperity anymore. We started deemphasizing nationalism. And the elites on the whole, from what I saw, and I interact with a ton of them, started to see themselves more and more as globalized, you know, global people.
They weren't nationalist anymore. That weakened the nation state, and we ended up with a country that didn't really have control over its borders anymore, didn't have control over its messaging anymore, you know, the kind of narrative that kept us together and helped us move forward. It was starting to get really messy as the Internet started to eat away at it. And we didn't have control over economics and finances because we globalized, and we were making mistake after mistake, like, for instance, bringing China into the WTO because we thought it would democratize them.
Instead, what we ended up doing was creating an opponent, a competitor. We grew them because we didn't go after fair trade. We just opened our borders, opened our trading system to them. So people were kind of fed up with that, because of what the establishment was saying whenever you mentioned focusing on US defense rather than policing the world with military force. Lots of useless wars and messes, as we saw in Iraq, and toppling governments that caused more misery than they saved us from, like Syria and Libya and other places, where those dictators weren't nearly as bad as the stuff that came after. We created ISIS because of that too. So, yeah, we made tons of mistakes. But if you said you wanted to focus on US defense, they'd say you're an isolationist, and you'd be dismissed from the conversation completely.
If you said you wanted fair trade, they'd call you a protectionist. If you said you wanted self-reliance, you're a protectionist. You were dismissed. And if you wanted good control over immigration and borders, you were racist, or you were a protectionist because you're not letting this flow happen. That restriction of the dialogue, and the increasing number of failures we saw, like the financial crisis and the failure to punish anyone for it, created this great well of discontent with the globalist orientation.
Trump tapped into it, and it put him into the White House, because they knew that he would disrupt things. They weren't electing him as a candidate. They ran an open source insurgency. It was dynamic. It wasn't run by him; it was run by places like The Donald on Reddit. I was interviewed there, like, a week before Trump won, asking, okay, how did this work? And I explained how it was working for them. Even places like Anonymous took a lot of my writing; the founders used that to found it. This kind of open source opposition to the establishment emerged, and it was our first taste of network politics.
[00:25:23] Shawn Yeager:
And so these swarms, it sounds like, emerged.
[00:25:26] John Robb:
The swarm was kind of a later thing; this was more of an insurgency. An open source insurgency works very much like open source software. It has a single goal that kind of unites everybody, and they have all different reasons and motivations for joining it. There are no barriers to entry, so they can join, they can contribute. And if the contribution works, in terms of memes or some kind of information attack or defense, people copy it. Trump was very good at interacting with that and pulling that out. And it's very innovative. Open source insurgency, open source politics was extraordinarily innovative, but it had one fatal flaw: once you achieve the objective, the open source organization falls apart.
Okay? No one could agree exactly what he should do once we got this grenade into office. It fell apart, and he was basically left there alone for four years to swim against a tidal wave of opposition from the establishment. And here we are... go ahead, please. No. Part of that opposition was the formation of a blue network. And the blue network worked differently than the open source insurgency. It waged moral warfare. Moral warfare is often a feature of guerrilla warfare: what you wanna do is make the opponent look immoral, not worthy of any legitimacy, shouldn't be in office. And they founded that based on opposition to known evils: racism, sexism, colonialism, antisemitism, that kind of stuff, all those things that they hated.
They formed a kind of tribal trust layer based on that. If you're opposed to these, then you are part of their tribe. Anyone who sees examples of those would have empathy evoked from those examples, and that bonded you in a kind of networked tribal kinship with these other players. So if you saw George Floyd, for instance, with the knee on his neck, and you hated the officer for doing that, your outrage would bind you with not just George Floyd but everyone else who was outraged by it. And your enemy would be the police officer, and everyone else's enemy would be the police officer and the system that he represented.
So instead of having many, many different kinds of opinions on how things should work and attacks from everywhere, a kind of maneuver warfare, within the blue tribe it was a cohesive pattern of morality. They parsed all the news at a kind of grand, co-curated scale, where everyone took every bit of news and spun it. We live in a kind of packetized news environment, where everything's broken down into very small bits. And everything was parsed and spun and put into a context. So if you saw X happen, you would say, okay, this happened because of racism, or this happened because of sexism, or the response is colonialist, or their response is antisemitic. That all happened in real time, and it created this huge pattern that everyone was supportive of. And it grew large enough that it had influence, and it aligned corporations and government with that pattern.
And that was used as a means of enforcement for limiting the conversation on these platforms.
[00:29:15] Shawn Yeager:
And so it sounds like, if I hear you, this is fascinating: the blue swarm learned, in effect, from the failure of the red swarm in that, as you said, once it achieved its goal, it dissolved. There was no cohesion. So in filtering everything as either for or against the swarm's identity, it overcomes the shortfall of the version of the swarm that came before it. And then, connecting the dots, Twitter, Meta, all the companies that enacted and conducted all of this, in my view, egregious censorship and other behavior: what did they see, and what were they capitalizing on? Okay. So
[00:30:03] John Robb:
the blue swarm drew its members from academia, from government, from big corporations, from the tech crowd. They formed a cohesive unit. They created a kind of reverse tribalism. Tribalism, the oldest way to create trust, like I said at the start, is usually formed based on a positive narrative: why we came together, what we've overcome, and where we're going in the future. That creates a kind of fictive blood kinship, meaning that you're not actually related to these people directly. They're not part of your immediate family or extended family, but they're members of your tribe, and you are connected with them at a deep level. Instead of positive narratives, they worked on negative narratives. It was what you were against. Right? So it created a very aggressive, combative kind of mindset.
And it also had a big flaw. The blue network won the White House with a guy that didn't even campaign. I mean, Biden mailed it in; he didn't do anything, and he was brought into the White House based on the blue network's kind of coercion of these big companies into strict alignment with their view of the world. They started really pushing to try to squeeze out the political opposition completely. And they also had an inability to police excessive use of their way of looking at the world.
They couldn't say no when somebody applied that morality, that moral structure, saying, you know, anything I do, if it's in opposition to this prohibited activity, this evil activity, it's okay. So you ended up... just take trans issues, for instance: putting men in women's sports. Whether you agree or not on the issue, it was highly unpopular. And surgeries for kids, again, a lightning rod; it proved unpopular even though they pushed it out and it was being rolled out at a mass scale. Not to pick on trans issues or anything, but that was just an example of its inability to actually police the excesses that would cause widespread resentment. DEI policies and things like that, where hiring of certain subgroups was prohibited in favor of other groups. And, of course, that created more and more resentment.
It wasn't rolled out with any kind of nuance or subtlety or insight into the viability of it. That generated all this resentment, and it was unlocked with Musk acquiring Twitter and turning it into X. It was then marshaled, instead of by millions of individuals like the open source insurgency in 2016, by these bigger professional accounts. Hundreds of these accounts, with hundreds of millions of followers collectively, like a Musk with a couple hundred million followers. They challenged establishment narratives, basically blue network narratives: question DEI, question immigration, question this and this and this. It started poking holes in their narratives, and people followed it. People signed on to it, and a revitalized red network emerged. The blue network was being taken apart piece by piece by these red network accounts.
It was increasingly hard for them to actually get their message across, and they lost the White House. I guess maybe the final step in them falling apart was the Gaza incident, which caused a huge rift: one of the founding factions within the blue network, this opposition to antisemitism, ran afoul of those opposing colonialism, fascism, and racism. It was irreconcilable, and that split the network. It's what you'd call noncooperative centers of gravity: when waging warfare, you want to break the opponent into noncooperative centers of gravity, where there are different groups within the opposition that will fight each other over moral issues.
I don't trust you, I don't, that kind of thing. So that was the final nail. The red network won, put Trump in the White House, but it didn't go away this time. The big accounts took over, and they filled most of the cabinet positions. They were the ones selecting a lot of the people going into the senior positions throughout government; the deputies and other roles below, they were putting their people in. A lot of those people were tech people. And they ran with their narratives. Musk decided, okay, I'm gonna push cutting the budget. He came up with DOGE, and he got the green light to do that, and started trying to account for where all this money was going.
It has more sustainability than the blue network, but since it's a really kind of early version of network decision making, like the blue network was in this last iteration, it has flaws in its process that will eventually cause it to blow up in the same way the blue network blew up. Because right now the blue network, for all intents and purposes, doesn't exist; it doesn't really have any weight anymore. It's gotta reform and reformat itself, and it will, because that moralism is actually quite useful in decision making, because it sets boundaries and standards. But maybe a minimalist approach might be the better way to get it to grow faster and become more permanent. But, yeah, the red network that's in control right now will
[00:36:19] Shawn Yeager:
Probably suffer the same as well.
[00:36:21] John Robb:
Right. But just like the evolutionary process, it dies, it comes back better, more evolved. Hopefully we don't end up with a version of network decision making, network politics, that is truly disastrous, because we could. It's so strong. I mean, look: it took less than ten years to take over the whole political system. And that to me is what's so
[00:36:49] Shawn Yeager:
awe-inspiring and terrifying about it: the ability to simply be washed downstream, or carried off with the swarm, whatever metaphor we choose. And with that as the backdrop, here we are just a few years on. We have the ability to manipulate information using LLMs and AI; deepfakes are trivial; automated propaganda. It has to be rewiring how we decide what's real. How do you see this shifting the way we choose to trust institutions and each other? Yeah.
[00:37:26] John Robb:
Well, having two networks looking at the news tends to keep it more honest. You'll have a built-in questioning of a piece of news or an item or event that favors the other side, particularly if it can be manipulated. And there are people constantly looking for fabrication and misinformation, and they'll call people out on it. So I don't see disinformation or misinformation as a problem to the extent that we had during the six or seven years prior to this last election. Everyone was crying about it and mad about it, but that was largely just an excuse for cracking down on stuff and controlling political outcomes using social networking and information flow. So,
[00:38:18] Shawn Yeager:
And do you see... I hear you, and that's what's echoing in the back of my head. It seems to me the very boogeyman that was called out, as you said, four or five years ago is now here: some of these deepfakes, or simply AI-generated video, are indistinguishable from reality, or rapidly approaching it, six months, twelve months out. So would you still say that it is not a serious concern?
[00:38:48] John Robb:
I don't see the shared information environment under that much threat from that kind of attack or that kind of development. I see it more as a threat at the individual level, because we're really, really close to AI-generated AR, augmented reality. When that hits, it becomes easy to be immersed in a visual and auditory environment that's completely AI generated. The disconnection that will happen at the individual level is insane. I mean, we're already suffering kind of a societal collapse at one level. I wrote a report on the fertility collapse at the global level just this year. It was the first year that global fertility fell below replacement.
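A back-of-the-envelope sketch of what falling below replacement implies, assuming the conventional replacement rate of about 2.1 births per woman and the 0.6 South Korea figure he cites next:

```latex
% Each generation's size relative to the one before is roughly TFR / 2.1.
% At replacement (TFR = 2.1) the ratio is 1. At a TFR of 0.6:
\[ \frac{0.6}{2.1} \approx 0.29 \]
% So each generation is under a third the size of the last, and two
% generations shrink the population to roughly 8% of its starting size.
```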
The numbers are just catching up with that effect, and it's falling really fast. It may get down to, say, 0.6, I think it was the lowest level I saw, in South Korea, the most wired nation on Earth. Right? Point six is one child for every four women. That's on the way to totally collapsing the global population, and no one seems to be able to do anything. It's too complex, because there are so many factors influencing it, and it's obviously a sign of deep dysfunction in how our society is functioning. And now we're about to throw this AR on top of it, where the most trusted people in your life could be not even real.
Right? I mean, you could see them and hear them, and they'll be with you constantly, and they'll talk to you: confidants and advisors and educators and, you know, romantic connections. These fake people, these fake worlds that are created, are gonna be just so much better than real life. And without moralizing
[00:40:50] Shawn Yeager:
as to whether that's good, bad, or neutral, what do you think is necessary for individuals? I think about the gap, still and certainly historically, at least in the US, as to media literacy and the ability to think critically about media. How does one think critically about this immersive bubble they live in, where everything reinforces their views? Do you have a position, John, on what's necessary to get through this next phase?
[00:41:23] John Robb:
Oh, well, a reality distortion field. Yes. Well, there's a kind of political tribalism as part of that solution, this network layer on how we think about politics and how we parse it. If you want other people to talk to, you have to pick a tribe, in a red or blue
[00:41:41] Shawn Yeager:
network. Right? And two is sufficient, you think? Well, two
[00:41:47] John Robb:
seems to work within the context of our society. I think what happens if you get below two, or you go to three or four, is that the dominant party will walk away with it. It's kind of like Metcalfe's law. Right? A network that's 50% larger isn't just 50% more valuable; it's many times more valuable, and it tends to concentrate power in the hands of that larger network. And the reason why I like the idea of having more information flow, even if it's painful, even if it's largely wrong, even if you have flat earthers out there arguing nonsense, or people coming up with incredibly stupid things and pushing them, is that you have to have that openness in order to solve complex challenges.
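A rough worked example of the Metcalfe's law point above, assuming the classic form in which a network's value scales with the square of its user count:

```latex
% Assume Metcalfe-style value: V(n) is proportional to n^2.
% A network 50% larger is then worth:
\[ \frac{V(1.5n)}{V(n)} = \frac{(1.5n)^2}{n^2} = 2.25 \]
% roughly 2.25x the value, not 1.5x, which is why the larger
% network tends to walk away with it.
```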
We live in a complex world now, not a complicated world. A complicated world is something you could solve using bureaucracy and planning. In a complex world, you get hit with all sorts of things that you don't have a clue how to solve. You actually just have to throw ideas at it and see what works, and when you find the thing that works, you reinforce it. You can't solve those problems if you have a structured information environment where everything's limited. You have to have a free flow. You don't know where the new idea that's gonna save everybody will come from. So,
[00:43:20] Shawn Yeager:
And that's a great, I think, pivot. We've talked quite a bit about the individual. On that note, you've warned about AI throwing supply chains and brands into chaos, and I think we can extrapolate from our conversation so far. But to go a bit deeper, what can organizations do to become more resilient to these changes? If the consumers, from the standpoint of a business or a brand, are these highly tribal, swarm-oriented, AR-immersed individuals,
[00:44:00] John Robb:
how do organizations need to think about that? Yeah. It's gonna be a problem for organizations, because I did a report on the global... Edelman's got a global trust survey. Yes. Familiar. You know, the big PR agency. Absolutely. Yeah. They found that in every country, every developed country, corporations are more trusted than the government. By a long shot. And that's recent, is it? Yeah. I have a report on that, so you can dig back in the Substack and pull that up. I will. It's basically saying that people wanted corporations to do what the government isn't doing and take control, and that people would not associate with a company that was aligned with one of the opposing political factions.
But they wanted them to get involved anyway. So it was just a weird thing: they want these corporations to solve the bigger problems that governments aren't solving, all these big global problems, everything from climate to economic justice to other things, but also to kind of weigh in at the domestic level. So corporations are being asked to align with one political network or the other. And we can see that turns out to be disastrous, everything from Miller Lite to Tesla. Right? Everyone who aligns too strongly with one network is attacked by the other. Yet they're being asked and expected to weigh in and do things.
In terms of AI, the way corporations are gonna participate is they're gonna be collecting data on everybody in their company. That's how they're gonna build those internal AIs necessary to kind of automate
[00:45:44] Shawn Yeager:
the cognitive capabilities necessary to grow and scale. And you're talking about, John, if I understand you, employees, not necessarily customers.
[00:45:53] John Robb:
Oh, yeah. They'll suck their employees dry of data, inevitably. I was trying to push, at the Senate a couple years ago, to get people to adopt a data ownership policy so that we would own our own data, everything we put up, and that would allow us to have a say. It was 2020, I think, just before AI hit, and I was saying, okay, this data is the most important thing in the world if you want to have a participatory economy where everyone's boats rise and there are limits and controls over what gets developed.
Give people ownership of the data. Let them have a say, and either generate income or exercise control over how it's used. Then you have a stake, an equity stake, one of those positive equity stakes, in these AIs that are being developed, which will become the most valuable technological artifacts ever created. And the Senate, of course, didn't believe AI was real at the time, so they kinda blew it off. Now we're in this situation where everybody's gonna be sucked dry of their data, strip-mined. It's worse than feudalism. Because feudalism was that the feudal lords owned all the land, which still kind of persists in England to a certain extent today.
And the serfs would work that land, and they would be obligated to pay a certain amount of the product.
[00:47:16] Shawn Yeager:
Mhmm.
[00:47:18] John Robb:
And so there was never any motivation for them to do any more. They were obliged to give them the produce, and in exchange they were allowed to live on that land, keep the excess for themselves, and get some level of security. But in this kind of feudalism, we're sucked dry of our data to build these amazingly valuable artifacts, and there's no real reciprocal thing, other than you get to use them sometime. Right? And you actually even have to pay to use them. At a corporate level... go ahead. No.
At a company level, it's gonna be really tough. The Screenwriters Guild kind of got into this early, and they set up some barriers. But most corporations are just sucking their employees dry: all their emails, all their utterances inside the corporate setting, every work product. And that's gonna be fed into AIs to build, not automation in the simple, mechanical sense, but cognitive replacement.
[00:48:18] Shawn Yeager:
Yes.
[00:48:19] John Robb:
They can think in complex ways and handle difficult exceptions. Because you know the standard thing: the 80% of corporate problems that take up 80% of the time are nonstandard tasks. It's easy to automate the bottom 20%, because you could build something to do that. But AI is gonna take that all over. Right?
[00:48:44] Shawn Yeager:
So the incentive for a given corporation to be able to tackle that is immense. I did some work in 2020 on personal data ownership, legal ownership, and technological containerization, if you will. Yep. Yep. I want to be more bullish, but what we learned was that most individuals would give away all their personal data for a free slice of pizza. So it sounds like, John, you don't see a lot of bright, sunshiny days coming in that regard. But is there hope? Do you see anything on the horizon that would address personal data ownership and balance power in that regard to some degree?
[00:49:27] John Robb:
No. And it's not just your demographics. When most people say data, they think, oh, just your demographic data, some preference information. I'm talking how you blink your eyes, how you move your mouth, the tone of your voice, the way you interact with other people. All of that data is being sucked up, because we've seen that the amount of data applied to training an AI is directly correlated to its capabilities, its intelligence. A digital twin. I don't know if people are still using that term, but I think that was a term of art for some time. Yeah. Most of the guys actually working on these AIs come out of this mind-model world. They're trying to create a model of the mind, one that replicates an individual human mind unconstrained by our biological limits. And they came up with a way to kind of reverse engineer it, to approximate a mind model by gathering all the social data, all this output, creating a model of that, and then trying to make it act more like a single mind. Right?
It's always kind of an approximation. But the social model that we call LLMs and all this other stuff is actually more interesting as a social model than a mind model, because it contains all of these different perspectives. I was writing fiction with this stuff. I can tease out the perspective of an Italian mother in New York in the fifties, and it will respond and think like that, because there's literature and thinking that reflects that inside
[00:51:06] Shawn Yeager:
that meta model. Yes. I know you're very fond, as I am... maybe fond is not the word. You're experiencing the benefits of Grok as compared to others. I believe you've posted on that recently, and it is tremendous. It's amazing. But as you say, it's to some degree a deal with the devil. So you've got this blended military and tech background. You've seen systems fail up close, across the spectrum. For those leaders who are facing this AI-driven future, who are looking to collect it all, perhaps with their employees more than their customers: what do failure modes look like? What should they be aware of? What's the cautionary tale to doing this? Or is it just, you know,
[00:52:01] John Robb:
the incentives are there, and they're gonna do it? Yeah. Don't let third parties suck all your data out. Alright? Control that, and limit your use of third parties to actually crunch it and create virtual employees. Optimally, if you do create virtual employees, do that on an open source AI platform, as independently as possible, so that you have complete control of the data. And if you wanna build up employee trust and get them working with these AIs, these virtual workers, you give them some level of ownership stake in the value of that AI.
They're talking about agents now, which was inevitable. Mhmm. AI agents, and, yeah, I've been through that, twenty, twenty-five years back; we were doing that with Microsoft in the nineties. They're redoing that whole thing. What we're really gonna look at is social AIs. Okay? The way social AI is gonna roll out, inside the corporation, inside our daily lives, inside small companies and big companies, is that it's going to be a virtual employee or coworker or educator or adviser, and you control the data that you put into them.
Okay? So if you train them on your preferences, or you train them as an employee on a process or a technique, the data delivered to that AI is not going off to the cloud. It's not being sucked up to the mothership to be used. And that AI won't be flat. It will have a personality, a personality that fits the role it's in, that you like as the person working with it, and it will grow and improve itself based on your interactions and will remember those previous interactions. And the best way to do that, of course, is open source, because you control the whole process.
So these social AIs build trust at the individual level with the other employees, humans in particular. They build trust with the human worker they're working with. They handle complex tasks like humans do. It's a partnership. It's not running a spreadsheet or something like that; it's not an automated process in that regard. Should we trust them?
[00:54:32] Shawn Yeager:
Or are we given no choice?
[00:54:34] John Robb:
Well, if you wanna compete, sure, you have to trust them. But you should only trust them if you control it or have an equity stake in the data that you're providing. If I'm a small company owner, I want open source. I want my data sequestered. Because if I create something cool or do something proprietary, I want that to remain within my company. Okay? If I have virtual employees and human employees, I want that knowledge and insight contained, directed to producing value that I can acquire. Absolutely.
The personality aspect is really big in terms of building trust. We have all of these ways of building trust with other human workers, right? Building cohesive teams, evaluating the performance of human workers in general, knowing what a successful training effort looks like, onboarding; that slot is already there for producing someone who will do the job that's required. The virtual worker, the social AI, fits into that slot. It's a natural fit.
So you have to have that personality. You have to have that vector of improvement. You have to have that interface that allows you to interact with that slot. Agents won't do it. Current dry, Siri-type things won't do it; they're too generic, too flat. And those mind models... we're not talking about an AI brainiac. These AIs only have to be as smart as the person who would normally be filling that role, which we've already achieved in many ways, and nice, cooperative, willing to stand their ground on certain issues, all those things that you would want out of an employee.
And we know what those are. It can change based on the type of company, and you can train for that, push for that. So, yeah, to the extent that you don't let the mothership suck it all up, you're better off. And the same is gonna be true with robotics. You're gonna see robots everywhere in ten years, and they'll all be AI run. So you'll have physical workers as well as virtual workers. It's the same thing. But you don't want it sucked up to a big company. That data has to remain with you, or you're lost.
Somebody's gonna tap into it, that latent capability, and put you out of business really quick. So,
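A minimal sketch of the data-sequestering pattern he is advocating: interaction memory kept in a local file and folded back into prompts for a locally hosted open source model. The file name and the local_model_generate stub are hypothetical stand-ins, not any specific product's API.

```python
# Sketch of a "social AI" whose memory stays on your own machine.
# The store is a local JSON file, and local_model_generate is a stub
# standing in for whatever locally hosted open source model you run.
import json
import pathlib

MEMORY = pathlib.Path("assistant_memory.json")  # hypothetical local store

def recall() -> list[dict]:
    """Load all past interactions from the local store."""
    return json.loads(MEMORY.read_text()) if MEMORY.exists() else []

def remember(role: str, text: str) -> None:
    """Append one interaction to the local store."""
    log = recall()
    log.append({"role": role, "text": text})
    MEMORY.write_text(json.dumps(log, indent=2))

def local_model_generate(prompt: str) -> str:
    """Stub: replace with a call to a locally hosted open source model."""
    return "stub reply"

def ask(question: str) -> str:
    """Answer with the full local history folded into the prompt."""
    history = "\n".join(f"{m['role']}: {m['text']}" for m in recall())
    prompt = f"{history}\nuser: {question}\nassistant:"
    answer = local_model_generate(prompt)
    remember("user", question)
    remember("assistant", answer)
    return answer
```

Because both the memory file and the model run locally, nothing an employee teaches the assistant is shipped to a third party, which is the control he is arguing for.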
[00:57:15] Shawn Yeager:
I find that, as is often the case, these things are invigorating and unsettling in equal measure. Hopefully invigorating. On that note, I would love to close it up, John, and again, thanks for your time. Yeah. What is one trend, vector, point on the horizon that you see that you think is going underappreciated
[00:57:44] John Robb:
that we should all be paying attention to? Well, the big rollout of social AI is gonna catch everyone by surprise.
[00:57:50] Shawn Yeager:
Define that for me, please. Well, it's like
[00:57:55] John Robb:
everyone's focused on cognitive capabilities, pushing the border: can they solve, you know, cure-cancer kinds of things? Pushing up to the limits of what they can do individually. But the companies and the individuals working on trying to socialize this... the voice quality, the emotional content of that, the visuals that will be folded into augmented reality, the personality. I see people working on personality, but it's, like, way out of the mainstream. And remembering past interactions, trying to get that memory more cohesive over time. So you're probably running multiple LLMs in parallel, right, to do that, so they can check each other on the responses based on the previous interactions and history. And there are other kinds of parallelism that probably work. Those people are all off to the side right now, but they're gonna become the most dominant. So they're gonna catch a lot of people by surprise.
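A hedged sketch of the parallel-LLMs idea he mentions: ask several models the same question and accept an answer only when a majority agrees. The models argument is a stand-in for whatever local or hosted endpoints you actually run; this illustrates the consistency check, not a specific product.

```python
# Consistency-checking across parallel models: each entry in `models`
# is a callable wrapping one LLM endpoint. Accept an answer only if a
# majority of models agree; otherwise flag the disagreement for review.
from collections import Counter
from typing import Callable

def consistent_answer(prompt: str, models: list[Callable[[str], str]]) -> str:
    answers = [model(prompt) for model in models]
    best, votes = Counter(answers).most_common(1)[0]
    if votes <= len(answers) // 2:
        raise ValueError(f"models disagree: {answers}")
    return best

# Toy stand-ins for real model calls:
if __name__ == "__main__":
    toy_models = [lambda p: "42", lambda p: "42", lambda p: "41"]
    print(consistent_answer("What is 6 * 7?", toy_models))  # prints 42
```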
And the first ones who employ it successfully are gonna make some amazing businesses. So, that's gonna be a surprise. I I don't think anyone's really care you know, set up to handle what's gonna happen with fertility crisis because when when we finally figured out fifteen years from now and and everyone's focused on it, being because it's global, you can't you know, immigration won't solve it. Nothing will solve it. So, you're gonna have to figure out a way to keep on growing the economy without people. My personal solution to that, and this thing would be the the ultimate kind of kicker, is that, at some point, we'll probably figure out a way to handle the collapse of population associated with the fertility crisis, and it'll probably be something that we hate right now, like clonings and and things like that. Just horrible kind of institutional kind of things or technology driven things.
But how you grow an economy without people is that you give these social AIs the ability to buy as well as produce, purchase. They have a a vector of improvement of of where they wanna go and what they wanna do, and their interest will associate with that. As you build that personality out, they can become consumers. And if you have a billion virtual and, robotic workers in your economy in fifteen years, which is probably a low figure if this thing start really zooming. Because but we went from in 02/2007, there were zero smartphones. Okay?
In 02/2024, there were 5,300,000,000 smartphone users. We rewired the whole world in that short period of time. We could we will have that many virtual workers, if not more, probably closer to trillions. If they start buying it, the country that figures that out and gives them the level of autonomy necessary to buy and things purchase things, become consumers as well as producers, some level of, you know, personal autonomy is going to have the biggest economy in the world by far. Everyone else would be standing still thinking they're gonna use these AIs as, you know, virtual slaves. Yeah. And they'll always be producing and never taking. But the one that figures out the purchasing cycle would just walk away. I mean, you could see takes the rest of the world ten years, fifteen years to catch up. Well, well, it's gonna be too late because that that first economy will zoom so fast. It'll change everything.
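One way to read the "multiple LLMs in parallel" idea above is as a draft-and-audit loop: one model answers, a second independently run model checks the answer against the stored interaction history, and the reply only goes out if the two agree. A toy sketch under that assumption, with both model calls as stand-ins rather than any particular vendor's API:

```python
def draft_model(prompt: str) -> str:
    return "[draft reply]"   # stand-in for model A (an assumption, not a real API)

def audit_model(prompt: str) -> str:
    return "NO"              # stand-in for an independently hosted model B

def consistent_reply(message: str, history: list, retries: int = 3) -> str:
    # Model A drafts; model B checks the draft against remembered interactions;
    # the reply is accepted only if the auditor finds no contradiction.
    context = "\n".join(history)
    for _ in range(retries):
        draft = draft_model(f"History:\n{context}\n\nUser: {message}\nReply:")
        verdict = audit_model(
            "Does this reply contradict anything in the history? Answer YES or NO.\n"
            f"History:\n{context}\nReply:\n{draft}"
        )
        if verdict.strip().upper().startswith("NO"):
            history.append(f"{message} -> {draft}")  # memory grows more cohesive
            return draft
    return "Let me double-check that; my records disagree."

print(consistent_reply("Did we ship version 2 last week?", []))
```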
[01:01:24] Shawn Yeager:
Fascinating. I did not see that coming. I'm thinking now of share of wallet. You know, what does that look like when you've got billions of AIs and agents and
[01:01:33] John Robb:
The cognitive problem I've seen, even with people who understand economics and finance, is that they tend to think in terms of their personal wealth and their ability to aggregate personal wealth. They don't see any value in making sure that all boats rise. And they don't see that they would be much better off, richer than in the most acquisitive scenario we can imagine, if everyone got richer and became more prosperous.
Because the economy would be so much bigger. The whole environment would be so much wealthier, so much more productive, so much richer in technologies and everything else, that the level you're starting from would be so much higher than where you'd be if a few very acquisitive people locked it down and took the majority of what was going on. They can't make the cognitive leap between that kind of slow, targeted environment and this dynamic, massive environment. The other thing I failed at was in 2010, when I came up with an autonomous corporation concept. It was a couple of years before they had these DAOs.
Yeah, it was, like, three years before they came out with DAOs, right? And Bitcoin had just started. It was a dollar at the time, and we were trying to figure out how to incorporate it, but it was just too early. The idea was that we were gonna come up with a company written entirely as software that would pay everybody who contributed to it. It was open source, an open dynamic where people could come in, do a piece of work that was measurable by the software, and get paid a certain amount for it, or just earn equity in that company and all future earnings, like an annuity on the value that was created. And I came up with this idea that it would be acquiring metadata.
So take a picture of every environment, every object, and then label it. Because, eventually, we were gonna be using this in AIs, and this would be the biggest data repository in the world for object recognition, that metadata, and all sorts of other things that individuals could contribute to. And as companies accessed it for training these models, with safeguards so they weren't keeping virtual copies of it to run infinite training sessions, everyone would get paid, and the value created could be immense. So that original hour of work you made $2 on could become worth hundreds of dollars downstream.
And it would pay out constantly as the number of participants kept growing. Of course, it didn't work, because there were too many prerequisites, like a functional Bitcoin and other things like that. But you basically invented Pokemon Go, among other things. Yeah, well. The thing is, that kind of company could have created the ecosystem necessary to challenge the incumbents. Alternatives like that can grow really quickly in this kind of network environment. Just as network politics has evolved and changed quickly, with the dynamism shifting in unexpected ways, we could see that with the economic system as well.
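A toy version of the payout mechanism just described, under the assumption that contributions are measurable by the software itself: work earns shares, and every later access fee is split pro rata, which is what lets the $2 hour keep earning downstream. All names here are hypothetical.

```python
from collections import defaultdict

class ContributionLedger:
    """Software-defined company: measurable work earns shares, and later
    access revenue pays out pro rata, like an annuity on value created."""

    def __init__(self):
        self.shares = defaultdict(float)    # contributor -> equity units earned
        self.balances = defaultdict(float)  # contributor -> payouts to date

    def record_work(self, contributor: str, units: float) -> None:
        # e.g. units = labeled images that passed the software's QA check
        self.shares[contributor] += units

    def distribute(self, access_fee: float) -> None:
        # Each model-training access fee is split by share of contribution.
        total = sum(self.shares.values())
        for who, s in self.shares.items():
            self.balances[who] += access_fee * (s / total)

ledger = ContributionLedger()
ledger.record_work("alice", 120)   # an early hour of labeling work
ledger.record_work("bob", 40)
ledger.distribute(1000.0)          # a downstream training-access payment
print(dict(ledger.balances))       # alice's old hour keeps earning 3x bob's
```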
I always thought that crypto would be better off within the context of an equity and corporate kind of structure, to help alleviate some of the problems associated with trust at scale.
[01:05:24] Shawn Yeager:
Yeah, and there's a lot of that emerging. Wow. Fascinating, John. I know Global Guerrillas is one of your primary focuses and projects, and I'll be sure that everyone gets a pointer to your Substack. As I say, I've really enjoyed it, and I recommend everybody check it out so we can keep getting these glimpses of what's coming in the future. Thanks so much for your time today, John. Oh, you're welcome. Really appreciate it. Talk soon. Bye bye.
[00:02:02] John Robb:
And so a brief aside, tell me about Userland software. How was that? Yeah. So I was at Forrester in in '95, and I got pulled into the Internet. They didn't have anyone covering it at the time, and I was a new guy. You know, I'd just been doing tier one special apps and went to Yale to kinda fill in the gaps in my business knowledge. And, I was showing that all these new technologies, Internet technologies to everybody in the company, and they said, okay. Why don't you start that service? And it went like it was, like, 25% of the revenue after I launched it and started writing these big picture reports. I had a kind of a knack for it. And, one of the reports I wrote in '96 was called, personal broadcast networks.
This idea that we would be posting things that people would be subscribing to them, and then we could create all these personal publish subscribe networks, that would constantly keep us up to date on all these other individuals. And, everyone at Microsoft and Netscape and others came to me and said, how do we how do we build this? And I go, I don't know yet. But it feels like that was the fact that Microsoft on the browser team at that time, trying to get into the trying to get into the, you know, the right quadrant. So please go ahead. Yeah. Mike Homer over when he was, before he died was over at Netscape, and, he was head of marketing there. And he came to see he was kinda like, how do we do this? But, I, went and did Gomez, and we did that exit. I think it was, like, sold for 295,000,000 was was the exit on that. And then, in 02/2001, I, you know, I had something else that I wanted to do, and I I wanted to see if I could make this idea real.
It was basically social networking. And, in 02/2001, I saw a company that was doing a lot of that called Userland. And, Dave Weiner, who was the chief programmer there, and, he needed somebody to run his company. And I came in and ran it, and it was like, you know, three, four person affair. Robert Scoble was our marketing for a while. Punching way above its weight, though. Yeah. And, we built, basically, social networking as we know it today. We we published RSS as an open source standard, and, we created a tool that you could subscribe to RSS feeds. And the feed on our tool, we called it radio. It's a desktop tool. I mean, it looks exactly with the same kind of constraint on characters that you get in Twitter. So it looked exactly like a Twitter feed. And the publishing, you could take it from Twitter, that Twitter kind of interface, and then publish it into a weblog. And we established a lot of the early standards for how weblogs are published, reverse chronicle chronological order, you know, with time stamps.
You could take what other people you you subscribe to and then post more on it. So and it was actually blue. It looked exactly like Facebook. So, that was the default color. I mean, it's exact the Facebook color. So we did that in 02/2001, and, it was too early. I got a I got experience trying to get people to see an emerging technology from the Ground Floor. And, it was excruciating. You know, I I saw the things that I told people how to use it and how it could be applied. Spent a couple hours with Ray Ozzie before he took over Microsoft for a while. Wow. And he was trying to figure out how to if this is something they wanted to employ, that could have put them in amazing position if he if he had taken that in. But I guess I wasn't convincing enough. And, we got New York Times to sign on to do an RSS feed and, as one of our exclusive feeds for for the radio tool. And once they were on, all these other places going on, NPR and everyone else, just wasn't enough.
We just ran out of steam, and, I left before Facebook and Twitter took off. So, went on to do other things. It just, it was it was very excruciating to go through. It it it's hard. People have cognitive filters. They just can't see it. They just couldn't see the value, and the dynamism. And I just saw it. Like, it was clear as day, and I thought it just couldn't communicate that well enough, I guess. But, anyways, they say, my experience early being early is,
[00:06:15] Shawn Yeager:
an even greater or as great a a source of failure as being wrong in startups. And so, it is tremendous. I mean, I I do though you know, again, as I said, it delights me to to to find that you were were running Goosierland. I my Mac two SI, you know, I think I remember is what I had or a Quadra nine fifty or whatever the details are are not are not, important, but I distinctly remember that application, that software. And, it has to be It was it was a cool experience. Yeah. And, I mean, you know, in doing that, and I think, to to connect the dots, I raise that because that was an extremely early experiment in I don't wanna strain the word decentralized, but this was was pushing, you know, a lot of this to the edge or to the end, to the end user.
And so, you know, we fast forward now twenty five plus years later. I'd I'd love to to get into, John, your framing of trust. And, again, I'll I'll sort of say because you have worked in special forces, because you have, built and sold a business, you you've got two, very, very different perspectives. How do you view trust? I know you thought a lot about this. Mhmm. I mean, trust
[00:07:29] John Robb:
is a minimum requirement for cohesion. Okay? And cohesion is a minimum requirement for high quality decision making. And, cohesion allows you to view the other participants in your decision making process as, friendly to that process, that they're not trying to sabotage it. It also works at a at a national level, with political discussion. So if you see, somebody with a different opinion that's an opponent, you can argue that and you could but you fundamentally trust them in the sense that they're working for the good of the nation to just disagree. But, I mean, the current environment, we see many of the people on the other side of the political spectrum as enemies.
They're outside our tribe. And that tribalism, it would you know, I dug into kind of find the kind of origins of trust is that we were wired at a biological and neuro level for tribal life. And, life inside a tribe was a high trust environment. You knew everybody there. You were reliant on everybody there to get through the next day, the next year. Their success made you more successful. It was largely a gifting culture and the culture where you gave things and, the best better the gift that you the more status you had. And then we broke with the tribal cult culture and started to scale our society, adding more people. And that tribal connection, knowing there was other people, wasn't sufficient then. And, we came up with kind of acts or work around trying to make that trust possible, and we came up with a legal system. We came up with bureaucracy, bureaucracy being a formalized way of getting things done that operated on a civil code, and there was a certain level of professionalism, associated with it. And so you could trust that its output would be high quality. And then, we replicated tribalism in the sense by, coming up with nationalism.
And nationalism gave us that cohesion at the national or nation state level, that allowed us to operate as a cohesive unit. And the marketplaces, of course, you know, they have to be fair. You have to make sure that they're not being corrupted or gained or or, in, you know, that kind of need for, you know, making that process transparent, is, you know, behind many of the rules that we see in terms of financial disclosures and things like that, and how contracts work. So we all of these things that we have are you know, that we rely on now are workarounds. Right?
So, you know, that's kind of my my thinking on this is that can we use current and and currently being developed or, technologies to find better ways to do that? Okay?
[00:10:28] Shawn Yeager:
Improve the quality of trust. That's that's very, very helpful. And I think, you know, it is interesting to to look at all of our technologies as enamored as some, myself included, may get with them as attempts to scale trust, attempts to get beyond Dunbar's number, attempts to replicate the drive, as you say. Well, on that note, your work has, for a long time, highlighted, highlighted how centralized systems are prone to collapse under pressure. And you've been writing quite a bit and I know engaged in in many, energetic discussions on x, on Twitter, about AI. How is AI, in your view, making these weak spots harder to ignore?
And and what's the clearest sign that we're trusting these centralized systems too much?
[00:11:19] John Robb:
Yeah. I mean, a centralized system, like, the big tech companies are currently running with social networking and other things. And, you know, social networking now is upstream of the, legacy news system and how we actually manage information transfer now and process information. Problem with that is that, it can be disrupted, and it's easily disrupted from the dot from underneath, but it can also be it could another failure mode is that it can be locked out. Right? And, one of my big worries back in 02/2017, was that, we were headed towards a potential lockdown of the information ecosystem, and that, the tendency of I I I I call our, political factions blue network, red network. The tendency of the blue network after it, you know, kicked Trump out of office, in 2020 was to lock down these systems to try to minimize the ability of the opposition to say anything, anything that wasn't approved. And, they got support across the industry.
Categorized all violations of these rules as as kind of evil and things that cause bans and stuff like that. I'm not trying to, you know, paint them as bad guys, but they saw it as a need for stability. And this was it needed to eliminate misinformation and disinformation. And, the rules were getting applied in more and more instances, both at the technology level with the the big companies as well as at the governmental level and and and different organizations. And, we just I saw I've been writing about AI during the same period since 02/2017, how it would actually work at at you know, when employed. So I was ahead of the game on this, and I saw AI just about to hit.
And that was my big fear was that AI would be employed as a means of, locking down the entire system, getting to every conversation, managing those conversations, and then controlling them and redirecting them and, you know, not just simple censorship, but even more. And, once that happened, you know, billions of people in real time could be managed. All the topics that were, available to think about and talk about would be would shrink down to a narrow orthodoxy, a narrow, group of of approved topics. The And then all the window. Yeah. And then well, yeah. Not just over to everything that could potentially challenge, stability would be wiped out, and that means all innovation dies. Your ability to actually push back against the establishment would or, you know, how things were operating dies your ability to adapt to changing conditions and everything else would die.
And, I mean, from perspective of of somebody who studied, like, totalitarian states, I mean, it was like, this was these corporations were building essentially like, totalitarian state in a box because all you need to do is turn it on, and they could do it cheaply. They didn't need, you know, massive bureaucracies of of people like they had the Stasi had of going through everybody's emails by hand and and and and that kind of thing. It could all be automated. And, that ended with with Musk taking over Twitter and unwinding and providing an alternative kind of information conduit that broke open the system. And and you're on your side. Is it is it that simple?
[00:15:00] Shawn Yeager:
Was was his buying Twitter and taking over Twitter that momentous?
[00:15:05] John Robb:
%. Trump never would have won. I mean, he didn't win in 2020, not because of election fraud or anything. It's because social networking locked him down. His ability to route around the media went to zero. It, you know, it was shut down on Facebook. It was shut down on Twitter. It was shut down on, all the other places that you would normally get information. And his supporters were sent to the boonies. And we would have seen a repeat of that in 2024 if not for Musk's acquisition of of of the company. I think from reading his interviews that he was motivated for much the same thing I had, had seen, and he saw it too.
And he was worried. And and the trigger for him and the trigger for me that this was potentially very dangerous was the response to Ukraine. I mean, we've been playing back and forth with with Russia, I think, with the mistaken policy of NATO expansion trying to turn a newly democratic company our country into an enemy. And the guy who came up with containment, George Kennan, said it was a bad idea. But nonetheless, it was a back and forth, and and Russia was pushing back against it as we pushed up against their border. And and, there'd been other conflicts in Ukraine and in in other places, and we turned this into using you know, the network turned it into a new cold war and pushed us up towards a nuclear confrontation where it didn't, you know, exist.
I mean, you could travel to Moscow and and take a vacation two years a year prior to that war, and and all of a sudden, they were being disconnected by millions of people, you know, across government. And and people inside government, inside corporations were taking action on their own to push us towards cold war, towards confrontation.
[00:16:54] Shawn Yeager:
Yeah. It fits And this is preemptive and overcompliance. I mean, I won't name a particular company, but I was with one that aggressively, aggressively overcomplied in my view. And so to your point Yeah. Yeah. They reframed the whole conflict from,
[00:17:10] John Robb:
you know, regional war and, you know, back and forth that we would normally negotiate our way out of and and and deal with into, they just escalated it. Like, this is like a new Hitler emerging or anyone who said anything else would be swarmed and, stomped on. And, that swarming behavior, actually deciding war you know, whether we're at war or peace was kind of scary to me. It was like it and it was unreasonable. It it you know, the way the swarm operated and, was that, it saw the enemy as, as unconstrained evil. And Putin was unconstrained evil because of his support for Trump, you know, which is the seed of it, interviewed in our elections, and that, he be instantly became Hitler, and therefore, we had to fight him. And, yeah, no. It was a it was a it was a scary moment. You couldn't there was no nuance, you know, to the whole position of the swarm. There was no negotiation.
It was only total victory, collapse of Russia, that kind of thing. So I don't wanna go off on the Ukraine thing, but the, you know, the swarming behavior was was, you know, scary to me. And then it you know, we've seen swarms with, George Floyd and other places that they it can't be reasoned with. And it distorts the truth, and it can't
[00:18:33] Shawn Yeager:
it doesn't stop when it should. So, I think that's what's so powerful about that that that word that and and what it conjures as compared, say, to a smart mob, a term, you know, used, I don't know, fifteen, twenty years ago maybe, is the swarm is, as you said, it it doesn't stop until it has, you know, complete victory. And and I'm curious, John, there are, you know, no end of of conspiracy theories. But if if we if we take a I take a naive approach and I look at incentives, what incentivized the emergence of the red and blue swarm and the all consuming total victory?
Are these emergent behaviors or or was you know, was and is there was was and is there an incentive that drives this? Well,
[00:19:25] John Robb:
I've been tracking, networked organizations and their emergence, and I I saw these networked organizations as, potentially another layer of societal decision making. So we had tribal, which is nationalism in our current context. We had bureaucratic, which was this kind of professional system for mobilizing resources and analyzing information and and, managing people at at scale. And then you had markets for discovering information, for, allocating resources based on need. And, we combine them in our system of governance by using markets for elections and to elect the leader of the bureaucracy and, that, we use nice nationalism as a way to motivate people to point in the right direction, orient themselves towards, you know, advancing the good of the nation, and then cooperate and provide cohesion in our decision making as a whole. So we combine those into a useful whole. And all of those developed with the advent of the printing press, and it took five hundred years to roll out. And it was a bloody, horrible process of creating these organizations, creating these, means of of of deciding what to do next, and that we arrived on this. And now here comes something new, you know, this networked layer, networking layer, you know, for decision making.
We saw it in the surge. We saw it in protest, like Egypt and other places, Tunisia. And then we now see it emerge in politics. And in terms of the red and blue, it emerged in the, you know, the red network, that supported Trump that tried to get him you know, got him into office the first time in 2016, came out of the blue. Okay? And it became out of the blue because he was challenging the globalist orientation of the current administration and and establishment. And, you know, that globalist orientation is, like, we made all our decisions as a country pointing towards globalism, and that we minimize nationalism and minimize the the kind of system that we were running during the Cold War where we had to have a strong nation state that was economically independent, that had an eye on prosperity and broad broad prosperity for the for everybody was focused on, national defense, you know, rather than defending the world. And, the we shifted at the end of the cold war towards a globalist orientation, and all the policies increasing the world had flat people and everyone else, you know, started thinking that everything you know, we didn't need any trade barriers or any kind of economic independence. We didn't need to worry about prosperity. We could open up all of them every American to competition in the rest of the world because that'll make them stronger, and that, we didn't deliver on prosperity anymore. And we we started deemphasizing nationalism. And the elites on the whole, from what I saw, and I, you know, interact with a ton of them, is that they started to see themselves more and more as globalized, you know, global people.
They weren't nationalist anymore. And that that weakened the nation state, and we ended up with a a a country that didn't really have control over its borders anymore, didn't have a control over its, the messaging anymore, you know, the kind of narrative that kept us together and helped us move forward. It was starting to get really messy as the Internet started to eat away at it. And, we didn't have control over economics and finances because we globalized, and and we were making mistake after mistake, like, for instance, February bringing China into the WTO because we thought it would democratize them.
And then instead, what we ended up doing is creating, an opponent, a competitor. We grew them because we didn't go after fair trade. We just opened our borders. It opened our our trading system to them. So, people are kind of fed up with that if you said because, you know, what the establishment was saying is that whenever you mentioned, focusing on US defense rather than, you know, policing the world with military force. Lots of useless wars and messed up in the rest, as we saw in Iraq and and toppling governments that caused more misery and than they saved us from, like Syria and Libya and other places, the bad of those dictators where they weren't as nearly as bad as the stuff that came after. We created ISIS because of that too. So it's like, yeah, we made tons of mistakes. Is that if you said you wanna focus on US defense, say you're, an isolationist, and you'd be dismissed from the conversation completely.
If you say you wanted fair trade, they'd call you a protectionist. You said you want self reliance, you're a protectionist. You were dismissed. And if you wanted a a good, you know, control over immigration and borders, you were racist, or you were a protectionist because you're not letting these, you know, this flow happen. So, that restriction of the dialogue and the increasing number of failures we saw, like, with the financial crisis and the failure to, you know, punish anyone for that and and everything else created this great well of discontent with the globalist orientation.
And, Trump tapped into it, and it put him into the White House because they knew that he would disrupt things. They weren't electing him as a candidate. They ran a open source insurgency. It was dynamic. It wasn't run by him. You know, it was run by places like The Donald on on Reddit. You know, I was interviewed there, like, a week, week before Trump. So it's like, okay. How did this work? And I explained how it was working for them. And in places like, you know, even, like, anonymous and stuff took a lot of my writing, and that was the founders used that to to found it. And, this kind of open source opposition to the establishment emerged. And it, was our first taste of of kind of network politics.
[00:25:23] Shawn Yeager:
And so these swarms, it sounds like you emerged.
[00:25:26] John Robb:
Swarm was kind of a later thing. I don't it's more of an uncertainty. Open source is surgery works like, very much like open source software. So it has a single single goal that kind of unites everybody. And they have all different reasons for motivations for joining it. I mean, there's no barriers to entry, so they can join. They can contribute. And if the contribution works in terms of, you know, memes or or some kind of information attack or or defense and it works, people copy it. And then Trump was very good at kind of interacting with that and pulling that out. And that, it's very innovative. I mean, you know, open source insurgency, open source politics, it was extraordinarily innovative, but it had one fatal flaw is that once you achieve the objective, the open source organization falls apart.
Okay? I mean, no one can agree exactly what he should do once we got this this grenade into office. And, it fell apart, and he was basically left there alone for four years to basically, you know, swim against the a tidal wave of opposition from the establishment. And here we are go ahead, please. No. Part of that opposition was the formation of a blue network. And the blue network worked different than the open source insurgency. It waged more warfare. And the moral warfare, it's often a feature of guerrilla warfare, and that, what you wanna do is you wanna make the opponent look, immoral and not worth not not worthy of any legitimacy, shouldn't shouldn't be in office. And they founded that based on a, opposition to known evils, racism, sexism, colonialism, antisemitism, that kind of stuff, all those things that they hated.
And, they formed a kind of tribal trust layer based on that. So they, if you're opposed to these, then you are part of their tribe. And anyone who sees examples of those, you would see you would have empathy evoked from those examples, and it that bonded you kind of a kind of a, network tribal kinship with these other players. So if you saw George Floyd, for instance, with the neon in his neck and you hated the officer for doing that, you your outrage would bind you with not just George Floyd, but everyone else was outraged by it. And your enemy would, you know, your enemy would be the police officer, and everyone else's enemies would be the police officer in the system, that he represented.
So, instead of having a wild number many, many different kind of opinions on how things should work and attacks from everywhere, a kind of a maneuver warfare. Within the blue tribe, it was a cohesive pattern of, morality. So they parsed all the news at a kind of grand, co curated scale where everyone took every bit of news and spun it. You know? And we live in a kind of a packetized news environment, and everything's broken down to very small bits. And everything was actually parsed and and spun and put into a context that. So if you saw x happen, then you would say, okay. This happened because of racism or this happened because of sexism or this you know, the response is colonialist. Their back their response is anti Semitic. And that all happened in real time, and and it created this huge pattern that everyone was supportive of. And it grew large enough, that it had influence and it it aligned corporations and government with that pattern.
And that was used as as a means of enforcement for limiting the conversation on these platforms.
[00:29:15] Shawn Yeager:
And so it sounds like if I if I hear you, this is fascinating. The blue swarm learned, in effect, from the failure of the red swarm in that, as you said, once it achieved its goal, it dissolved. There was no cohesion. And so in in filtering everything, as, you know, either for or anti the swarm's identity, then it it overcomes the short of the version of the swarm that came before it. And then it sounds like connecting the dots, Twitter, Meta, all the companies that that enacted and conducted all of this, in my view, egregious, censorship and other behavior, what did they see, and what were they capitalizing on? Okay. So
[00:30:03] John Robb:
the Boostworm drew its members from academia, from government, you know, from, big corporations, from the tech crowd. And, they formed a cohesive unit. They created a a kind of a reverse tribalism. So tribalism, the oldest way, of course, to create trust, like I said at the start, is that it's usually formed based on a positive narrative, why we came together, why what what we've overcome and where we're going in the future. And, that creates a kind of fictive blood kinship, meaning that you're not actually related to these people directly. They're not part of your immediate family or extended family, but you are they're members of your tribe and you are connected with them at a deep level. Instead of positive narratives, they worked on negative narratives. It is what you were against. Right? So it created a very aggressive combative kind of mindset.
And it also had a big flaw. So when the Blue Network won the White House with a guy that didn't even campaign I mean, Biden mailed it in. I mean, he didn't do anything, and he was he was brought into the White House, based on the Blue network's kind of coercion of these big companies into, you know, strict alignment and employ you know, alignment with their view of the world. They started, you know, really pushing to try to squeeze out the political opposition completely. And, they also had an inability to police excessive use of their, way of looking at the world.
They couldn't say no. When somebody applied that morality, that moral structure saying, you know, any opposition to what I do, if it's in opposition to this, you know, prohibited activity, this evil activity, it's okay. And so, you ended up, normally, it would be like, okay. I'm I'm just take trans for instance, putting men in sports. And, you know, it's whether you agree or not on agree on the issue, it it was highly unpopular. And, surgeries for kids, again, lightning rod. It proved unpopular even though they pushed it out and it was being rolled out at a at a mass scale. Not to pick on trends or anything, but that was just, like, example of its inability to actually police the excesses that would cause, you know, widespread resentment, DEI policies and things like that where, you know, hiring of, you know, certain subgroups were was prohibited in favor of other groups. And, of course, that created more and more resentment.
It wasn't rolled out with any kind of nuance and subtlety and insight into, the viability of it. That generated all this negative resentment, and it was unlocked with Musk acquiring Twitter and turning it into x. And, it was then marshaled by instead of a bunch you know, millions of individuals like an open source insurgency in in in 2016, it was, marshaled by these bigger professional accounts. Hundreds of these accounts, millions of hundreds of millions of followers collectively, like a Musk with a couple hundred million followers. And and, they challenged establishment narratives, basically, blue network narratives, question DEI, question immigration, question this and this and this. It started poking holes in their in their narratives, and, people followed it. People signed on to it and, revitalized Red Network emerged. And, it had you know, the Blue Network was, you know, being taken apart piece by piece by piece, by these, you know, Red Network accounts.
And, it was increasingly hard for them to actually, you know, get their message across, and they lost the White House. I mean, I guess maybe the final step on on them falling apart is one of the founding the Gaza incident caused a huge rift is that one of the founding, factions within the blue network is this opposition to antisemitism that ran a foul of those of, you know, opposing colonialism, fascism, and and racism. And it was irreconcilable, and that, that split the network. And what I usually, what do you call is a noncooperative centers of gravity, is that when waging war and warfare, if if you want to break the opponent into noncooperative centers of gravity where there are different groups within the opposition that would fight each other over moral issues.
I don't trust you. I don't. That kind of thing. So, that was the final nail. Red network won. Put Trump in the White House, but it didn't go away this time. The big accounts took over, and, they served as the, most of the cabinet positions. They were the ones that were selecting a lot of the people that were going into the senior positions throughout government. You know, the deputies and other things behind you know, below, they were putting their people in. A lot of those people were tech people, and they did that. And they ran with their narratives. I mean, you know, Musk decided, okay. I'm gonna push cutting the budget. He came up with Doge, and he got the green light to do that. Started to trying to account for where all this money was going.
Then they, you know, it has more sustainability than the blue network, but I think it has many since it's really kind of early, version of network decision making, like the Blue network was in this last iteration, is that it have flaws in its process that will eventually cause it to blow up in the same way the Blue network blew up. Because right now, the Blue network, for all intents and purposes, doesn't exist as it doesn't really have any weight anymore. It's gotta reform and reformat itself, and it will. And because that moralism is actually quite useful in decision making, because it sets boundaries and standards, but maybe not as maybe a minimalist approach that might be the better way to get it to grow faster and become more permanent. But, yeah, the red network that's in control right now will
[00:36:19] Shawn Yeager:
Probably suffer the same as well.
[00:36:21] John Robb:
Right. And but just like the evolutionary process, it dies. It comes back better, more evolved. Hopefully, not we don't end up with a version of network decision making, network politics that is truly disastrous because we could. It you know, it's so strong. I mean, look how it taken less than ten years before it took over the the whole political system. And that that to me is is what's so
[00:36:49] Shawn Yeager:
awe inspiring and terrifying about it is the ability to simply be washed downstream, you know, and or or carried off with the swarm, whatever metaphor we choose. And and I think, you know, with that as the backdrop, here we are in just a few years. We have the ability to manipulate information using LLMs, AI, deepfakes are trivial, automated propaganda. You know, it has to be rewiring how we decide. It is rewiring how we decide what's real. How do you see this shifting the way we choose to trust institutions and each other? Yeah.
[00:37:26] John Robb:
Well, having having two networks looking at the news tried tends to keep it more honest. So, you'll have a a a built in, questioning of of of a piece of news or or an item or event that favors the other side, particularly if it can be manipulated. And there are people out, you know, constantly looking at that, looking at for fabrication and misinformation, and they'll call people out on it. So I don't see disinformation or misinformation as a problem to the extent that we had prior to, during the six years or seven years prior to this last election. And everyone was crying about it and and mad about it. But that was largely just an excuse for cracking down on stuff and controlling stuff and controlling political outcomes using social networking and and information flow. So,
[00:38:18] Shawn Yeager:
and do you do you see and I and I hear you and and that, you know, that's what's echoing in the back of my head. It seems to me the very boogeyman that was called out, as you said, four or five years ago is now in some of these deepfakes or just simply AI generated video indistinguishable or rapidly approaching six months, twelve months indistinguishable from reality. So do do you discern, or or would you still say that it it is not a a a serious concern?
[00:38:48] John Robb:
I don't see the shared information environment, under that much threat from that kind of attack or that kind of, development. I see it more of a a threat at the individual level because we're really, really close to AI generated AR augmented reality. And, when that hits when that hits, it becomes easy where you're immersed in a visual and auditory, environment that's completely AI generated. The disconnection that will happen at the individual level is insane. I mean, we're already, you know, suffering kind of a sidal collapse at one level. I mean, you know, I wrote a report on the fertility collapse at the global level that just, this year. It was the first year that global fertility fell below replacement.
And the information is just you know, the numbers are just catching up with that, you know, effect, but the and it's falling really fast. Make it get down to, say, one point point six, I think it was, the the lowest level, I saw in in, South Korea, the most wired nation on Earth. Right? Point six is one child for every four women. That's the way to kind of totally collapse the global population, and no one seems to be able to do anything. All that It's too complex already, because there's so many factors influencing it, and it's obviously shown a sign of deep dysfunction and how we our society is functioning. And now we're about to throw this AR on top of it where you could have the most trusted people in your life are not even real.
Right? I mean, you could see them and hear them, and they'll be with you constantly, and they'll talk to you, not only, you know, just confidants and and and, advisors and educators and and, you know, romantic, you know, connections. These people are gonna be these fake people are gonna be these fake worlds that are created are gonna be just so much better than real life. And without without moralizing
[00:40:50] Shawn Yeager:
as to whether that's good, bad, or neutral, what do you think is necessary for individuals? You know, I think about still and and certainly historically the gap, at least in The US, as to media literacy and the ability to critically think about media. And now how does one critically think about this this immersive bubble that they live in where everything reinforces their views? You know, do do you have a position, position, John, on what's necessary to to get through this next phase?
[00:41:23] John Robb:
Oh, well, reality distortion field. Yes. Well, there's a kind of political tribalism, you know, as part of that solution, this network layer on how we think about politics and how we parse it at the at if if you want other people to talk to, you have to, you know, pick a fact pick a a tribe, in a red or blue
[00:41:41] Shawn Yeager:
network. Right? And, And two is sufficient, you think? Well, two
[00:41:47] John Robb:
But other one Seems to work within our context of our society. I I think what happens is if you get below two or you go to three or you go to four, is that the dominant party will walk away with it. It's kind of like that Metcalfe's law. Right? I think, You know, a a network that's, you know, 50% larger isn't just 50% valuable. It's many times more valuable, and it tends to concentrate power in the hands of that that larger network. And, I mean, I I think this the reason why I like the idea of having more information flow, even if it's painful, even if it's largely wrong, even if you have flat out earthers out there, you know, arguing nonsense, or people coming up with incredibly stupid things and pushing them, is that you have to have that openness in order to solve complex challenges.
We live in a complex world now, not a complicated world. Complicated world is, you know, something you could solve using bureaucracy and planning. In a complex world, you you get hit with all sorts of things that you don't have a clue how to solve. Okay? You actually just have to throw things at it, ideas, and see what works. And then when you find the thing that works, then you reinforce it. And, you can't solve those problems if you have a, you know, structured information environment where everything's, like, limited. You have to have a free flow. You don't know where the new idea that's gonna save everybody will come from. So,
[00:43:20] Shawn Yeager:
And that's a that's a great, I think, pivot to or or or shift to we've talked quite a bit about the individual. On that note, you know, you you've warned about AI throwing supply chains and brands into chaos, and and I think we can extrapolate from our conversation so far. But to to go a bit deeper, what can organizations do to become more resilient to these changes? If if if the consumer, you know, from the standpoint of a of a business or a brand, are these highly tribal, swarm oriented, AR immersed, you know, individuals,
[00:44:00] John Robb:
how do organizations need to think about that? Yeah. It's it's gonna be a problem for organizations because I I did a report on the, you know, the global, Edelman's got a global trust survey. Yes. Familiar. You know, Microsoft sold a PR agency. Absolutely. Yeah. And so they found that in every country, every developed country, corporations are more trusted than they go. Every a long shot. And that's a recent, is it? Yeah. I have a report on that. So you can dig back in the in the in the substack and pull that up. I will. And it's basically saying that, people wanted corporations to do what the government isn't doing and take control and that people would not associate with a company that was aligned with one of the opposing political factions.
But they wanted them to get into involved anyway. So, it was just a weird thing is that, okay. Here, they want these corporations to kind of solve the bigger problems that governments aren't solving, which is all these big global problems, in terms of, you know, everything from climate to, you know, economic justice to other things, but also kind of weigh in at the domestic level. So corporations are are being asked to align with one network or the other political network. And, we can see that it that turns out to be disastrous, everything from, you know, Miller Lite to Tesla. Right? So everyone everyone who aligns too strongly with one network is attacked by the other. Yet they're being asked and expected to kind of weigh in and and do things.
In terms of AI, I mean, the way corporations are gonna have to you know, in order to participate is they're they're gonna be collecting data on everybody in their company. They you know, that's how they're gonna build those internal AIs necessary to kind of automate,
[00:45:44] Shawn Yeager:
the cognitive capabilities necessary to to grow and scale. And and you're talking about, John, if I understand you, employees, not not necessarily customers.
[00:45:53] John Robb:
Oh, yeah. No. They'll they'll suck their employees dry of data inevitably. I was trying to push at at the senate a couple years ago trying to get people to adopt a data ownership policy so that we would own our own data, everything we put up. And that would allow us to have a say in just before AI hit. I think it was 2020, I think it was. And, just before AI hit, I was saying, okay. This data is the most important thing in the world. If you want to have a participatory economy where everyone's votes rise and there's, you know, limits and controls over, you know, what gets developed.
Give people ownership, of the data. Let them have a say and generate either income or, exercise control over how it's used. And then you have a stake, an equity stake, a good you know, one of those positive equity stakes in these AIs that are being developed, which will become the most valuable technological artifacts ever created. And they, you know, said it, of course, didn't believe AI was real at the time, so they kinda blew it off. And now we're in this situation where everybody's gonna be sucked, dry of their data strip mined. It's kinda like a it's worse than feudalism. Because feudalism was that the feudal lords are owned all the land and still kind of persists in England to a certain extent today.
And that serfs would work that land, and they would be obligated to pay off a certain amount of product.
[00:47:16] Shawn Yeager:
Mhmm.
[00:47:18] John Robb:
And so there was never any motivation for them to do anymore. But there were, you know, they were they were it has an they were obliged to give them the produce, and in in exchange, they were allowed to live on that land, take the access for themselves, and get some level of security. But in this kind of feudalism, we're sucked dry of our data to build these amazingly valuable artifacts, and there's no real reciprocal thing other than you get to use them sometime. Right? And you actually even have to pay us to use them. In a corporate level Go ahead. No.
In a in a company level, it's gonna be really tough because, screenwriters guild kind of got into this early, and they kind of set up some barriers. But most corporations are just sucking their employees' drive, all their emails, all their utterances inside the corporate setting, every work product. And that's gonna be fed into AI as to, not automation in a simple sense where it's not mechanical and it's it build cognitive replacement.
[00:48:18] Shawn Yeager:
Yes.
[00:48:19] John Robb:
They can think in complex and and and ways, and handle handle difficult exceptions. Because you know how the the standard thing is, like, 80% of all corporate problems are that take up 80% of the time are nonstandard tasks. It's easy to automate the bottom 20% because you can you could you could build something to do that. But if AI is gonna take that all over. Right?
[00:48:44] Shawn Yeager:
So the incentive for a given corporation to be able to tackle that is immense. I I did some work in 2020 on personal data ownership, legal ownership, and and and and technological sort of containerization, if you will. And Yep. Yep. I want to be more bullish, but, you know, what we learned was that most individuals would, you know, give away all their personal data for a free slice of pizza. And so, you know, it sounds like, John, you don't see a lot of bright, sunshiny days coming in that regard. But is there hope? Is there do you do you see anything on the horizon that would that would address personal data ownership and and would balance power in that regard to some degree?
[00:49:27] John Robb:
No. And so, it's not just your demographics. Most people wanna say data or they think, oh, just your demographic data. Oh, no. Some preference information. I'm talking, you know, how you blink your eyes and how you move your mouth, where your tone of your voice, your the way you interact with other people. All of that data is being sucked up because we've seen a strong correlation between the amount of data set, you know, applied to training and AI is directly correlated to its capabilities intelligence. A digital twin. I don't know if people are still using that term, but I think that was a term of art for some time. Yeah. No. The I mean, these AIs I mean, most of the guys actually working on these AIs come out of this mind model. Yeah. They're trying to create a model of the mind, and and it replicates a individual human mind unconstrained by, our biological limit. And, mistake was well, they, came up with a a way to kind of, reverse engineer it and kind of approximate a mind model by gathering all the social data, all this output, and then creating a model of that and then trying to make it act more like a single mind. Right?
It's always kind of approximation. But the social model that we're use we call LLMs and all this other stuff is is actually more interesting as a social model than a mind model. You know, as because it contains all of these different perspectives. I was writing fiction with this stuff. I can tease out the perspective of a an Italian mother in New York sitting in the fifties, and it will respond and think like that if you if you because there's literature and thinking that reflects that inside
[00:51:06] Shawn Yeager:
that that meta model. Yes. I mean, I know I know you're you're very fond as I am. Maybe fond is not the word. You're you're experiencing the the benefits of grok as compared to others. I believe you've you've posted on that recently, and it is tremendous. It's amazing. But as you say, I mean, it's to some degree, you know, a deal with the devil. So you've got, you know, this blended military and tech background. You've seen systems fail up close, sort of across the spectrum. For those leaders who are facing this AI driven future, who are looking to to collect it all, you know, perhaps with their employees more than their customers, What do failure modes look like? What what should they be aware of? What's the cautionary tale to doing this? Or or is it just, you know,
[00:52:01] John Robb:
the incentives are there, and they're gonna do it? Yeah. Don't let third party suck all your data out. Alright? And control that. And and limit your use of third parties to actually crunch it and create, virtual employees. Optimally, if you do create virtual employees, is that you so do that on a open source AI platform. And is it as independently as possible and that you have complete control of the data, if you have, you wanna, you know, build up employee trust and get them, you know, working with these AIs, these virtual workers, is that you give them some level of ownership stake in the value of that AI.
And that, I I I see that, you know, they're talking about agents now, which was inevitable. Mhmm. AI agents and and that's yeah. I've been through that since twenty years or twenty five years. We were doing Microsoft in the nineties. Yeah. They're they're redoing that whole thing. What we're really gonna look at is, social AIs. Okay? Social AI is, the way it's gonna roll out inside the corporation, inside our daily lives, inside of small companies and big companies is that it's going to be, a virtual employee or coworker or educator or adviser, and that you control the data that you put into them.
Okay? So if you train them on your preferences and or you train them as an employee on a process or a technique, the data that delivered to that AI is not going off to the cloud. It's not being sucked up to the the mothership to be used. And that, that AI, it won't be flat. It will have a personality, and that, it will have a personality that fits the role that they're in, that you like as as the person who's working with them, and that, it will grow and improve itself based on your interactions and will remember those previous interactions. And, the best way to do that, of course, was open source because you you you control the whole process.
So these social AIs build trust at the individual level with the other employees, with the human worker they're working with in particular. They handle complex tasks like humans do. It's a partnership. It's not running a spreadsheet or something like that. It's not an automated process in that regard.
[00:54:32] Shawn Yeager:
Should we trust them? Or are we given no choice?
[00:54:34] John Robb:
Well, if you wanna compete, sure. I mean, you have to trust them. But you should only trust them if you control it or have an equity stake in the data that you're providing. I mean, if I'm a small company owner, I want, you know, open source. I want my data sequestered. Because if I create something cool or do something proprietary, I want that to remain within my company. Okay? If I have virtual employees and human employees, I want that knowledge and insight contained, directed to producing value that I can acquire. Absolutely.
The personality aspect is really big in terms of building trust. I mean, we have all of these ways of building trust with other human workers, right? Building cohesive teams with other human workers, evaluating the performance of human workers, knowing what a successful training effort looks like, onboarding. All of that, that slot, is already there for producing someone who will do the job that's required. The virtual worker, the social AI, fits into that slot. It's a natural fit.
And so you have to have that personality. You have to have that vector of improvement. You have to have that interface that allows you to interact with that slot. Agents won't do it. Current, dry, Siri-type things won't do it. It's too generic, too flat. And those mind models... we're not talking about an AI brainiac. No. These AIs only have to be as smart as the person who would normally be filling that role, which we've already achieved in many ways, and nice, cooperative, willing to stand their ground on certain issues, all those things that you would want out of an employee.
And we know what those are. It can change based on the type of company, and you can train for that, you can push for that. So, yeah, to the extent that you don't let the mothership suck it all up, you're better off. And the same is gonna be true with robotics. You're gonna see robots everywhere in ten years. They'll all be AI run. So you'll have physical workers as well as virtual workers. It's the same thing. But you don't want it sucked up to a big company. That data has to remain with you, or you're lost.
Somebody's gonna tap into that latent capability and put you out of business really quick. So,
[00:57:15] Shawn Yeager:
I find that, as is often the case, these things are invigorating and unsettling in equal measure. And on that note, hopefully the invigorating one, I would love to close it out, John. And again, thanks for your time. What is one trend, vector, point on the horizon that you see that you think is going underappreciated, that we should all be paying attention to?
[00:57:44] John Robb:
Well, the big rollout of social AI is gonna catch everyone by surprise.
[00:57:50] Shawn Yeager:
Define that for me, please.
[00:57:55] John Robb:
Well, it's like everyone's focused on, you know, cognitive capabilities, pushing the border. Can they solve... can they cure cancer, that kind of thing? Pushing up to the limits of what they can do individually. The companies and individuals working on trying to socialize it, the voice quality, the emotional content of that, the visuals that will be folded into augmented reality, the personality... I see people working on personality, but it's, like, way out of the mainstream. And remembering past interactions, trying to get that memory more cohesive over time. So you're probably running multiple LLMs in parallel, right, to do that, so they can check each other on the responses based on the previous interactions and history. And there are other kinds of parallelism that probably work. Those people are all off on the sidelines right now, but they're gonna become the most dominant. So they're gonna catch a lot of people by surprise.
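The parallel-check idea can be sketched very simply. In the toy version below, one model drafts a reply and a second model audits it against the stored conversation history; the local endpoint, model names, and YES/NO audit protocol are all assumptions made up for illustration, not any product's actual API.

```python
# A rough sketch of "multiple LLMs in parallel" checking each other:
# one model drafts an answer, a second audits it against the stored
# interaction history and flags contradictions.
import json
import requests

URL = "http://localhost:11434/api/chat"  # any local inference server (assumed)

def chat(model: str, messages: list) -> str:
    resp = requests.post(URL, json={"model": model, "messages": messages, "stream": False})
    resp.raise_for_status()
    return resp.json()["message"]["content"]

def answer_with_audit(question: str, history: list) -> str:
    # First model drafts a reply in the context of the remembered history.
    draft = chat("llama3", history + [{"role": "user", "content": question}])
    # Second model cross-checks the draft against that same history.
    audit_prompt = (
        "Here is a conversation history:\n"
        f"{json.dumps(history)}\n\n"
        f"Proposed reply to '{question}':\n{draft}\n\n"
        "Does the reply contradict anything in the history? Answer YES or NO, then explain."
    )
    verdict = chat("mistral", [{"role": "user", "content": audit_prompt}])
    # A fuller system would loop, regenerating until the auditor passes the draft.
    return draft if verdict.strip().upper().startswith("NO") else "[flagged for review] " + draft
```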
And the first ones who employ it successfully are gonna make some amazing businesses. So that's gonna be a surprise. I also don't think anyone's really set up to handle what's gonna happen with the fertility crisis, because when we finally figure it out, fifteen years from now, and everyone's focused on it... because it's global, immigration won't solve it. Nothing will solve it. So you're gonna have to figure out a way to keep growing the economy without people. At some point, we'll probably figure out a way to handle the collapse of population associated with the fertility crisis, and it'll probably be something that we hate right now, like cloning and things like that, just horrible institutional or technology-driven things.
But my personal solution, and this would be the ultimate kicker, the way you grow an economy without people, is that you give these social AIs the ability to buy as well as produce. They have a vector of improvement, of where they wanna go and what they wanna do, and their interests will grow along with that. As you build that personality out, they can become consumers. And you could have a billion virtual and robotic workers in your economy in fifteen years, which is probably a low figure if this thing starts really zooming. Because in 2007, there were zero smartphones. Okay?
In 2024, there were 5,300,000,000 smartphone users. We rewired the whole world in that short period of time. We will have that many virtual workers, if not more, probably closer to trillions. If they start buying, the country that figures that out, that gives them the level of autonomy necessary to purchase things, to become consumers as well as producers, some level of, you know, personal autonomy, is going to have the biggest economy in the world by far. Everyone else will be standing still, thinking they're gonna use these AIs as, you know, virtual slaves, always producing and never taking. But the one that figures out the purchasing cycle will just walk away. I mean, it could take the rest of the world ten, fifteen years to catch up. Well, it's gonna be too late, because that first economy will zoom so fast. It'll change everything.
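As a back-of-envelope check on the analogy, the smartphone figures from the conversation imply an average adoption rate that makes the fifteen-year virtual-worker numbers look plausible; the extrapolation below is purely illustrative, not a forecast.

```python
# Back-of-envelope check: smartphones went from roughly zero in 2007
# to ~5.3 billion users by 2024 (figures from the conversation).
smartphone_users_2024 = 5_300_000_000
years = 2024 - 2007  # 17 years
avg_new_users_per_year = smartphone_users_2024 / years
print(f"~{avg_new_users_per_year / 1e6:.0f} million new smartphone users per year on average")

# If virtual workers merely matched that pace (an assumption), fifteen
# years would put them well past the "billion" Robb calls a low figure.
virtual_workers_15y = avg_new_users_per_year * 15
print(f"~{virtual_workers_15y / 1e9:.1f} billion virtual workers in 15 years at that pace")
```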
[01:01:24] Shawn Yeager:
Fascinating. I did not see that coming. I'm thinking now of share of wallet. What does that look like when you've got billions of AIs and agents?
[01:01:33] John Robb:
But even, you know, a lot of people... this is the cognitive problem I've seen, even with people who understand economics and finance and stuff: they tend to think of their personal wealth and their ability to aggregate personal wealth. And they don't see any value in making sure that all boats rise. They don't see that they would be much better off, richer than in the most acquisitive scenario we can imagine, if everyone got richer and became more prosperous.
Because the economy would be so much bigger. The whole environment would be so much wealthier, so much more productive, so much richer in technologies and everything else, that the level you're starting from would be so much higher compared to where you'd be if you had a few people who were very acquisitive and locked it down, took the majority of what was going on. They can't make the cognitive leap between that kind of slow, targeted environment and this dynamic, massive environment. The other thing, the thing I failed at, was in 2010, when I came up with a kind of autonomous corporation concept. It was a couple of years before they had these DAOs.
Yeah, it was, like, three years before they came out with DAOs. Right? And Bitcoin had just started; it was a dollar at the time. We were trying to figure out how to incorporate it into this, but it was just too early. The idea was that we were gonna come up with a company written entirely as software, one that would pay everybody who contributed to that economy. It was open source, an open dynamic where people could come in, do a piece of work that was measurable by the software, and get paid a certain amount for it, or just earn equity in that company and all future earnings, like an annuity on the value that was created. And I came up with this idea that it would be acquiring, like, metadata.
So take a picture of every environment, every object, and then label it. Because eventually we were gonna be using this in AIs, and this data repository would be the biggest in the world for object recognition, that metadata, and all sorts of other things that individuals could contribute to. And as companies accessed it for training these models, making sure they weren't making virtual copies of it so they could run infinite training sessions, everyone would get paid for that, and the value created could be immense. So that original hour of work that you made $2 on could become worth hundreds of dollars downstream.
And it'd be paid out constantly as the number of participants kept growing. Of course, it didn't work, because there were too many prerequisites, like a functional Bitcoin and other things like that. But you invented Pokemon Go, among other things. Yeah. Well, yeah. But the thing is, that kind of company could have created the ecosystem necessary to challenge the incumbents. If you have these kinds of alternatives, they can grow really quickly in this kind of network environment. Just like network politics have evolved and changed so quickly, with the dynamism shifting in unexpected ways, we could see that with the economic system as well.
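The contribution-and-annuity mechanics Robb describes reduce to a simple ledger: measurable work earns equity units, and each later licensing of the dataset is split pro rata across those units. The toy sketch below invents all names, rates, and rules for illustration.

```python
# A toy sketch of the autonomous-corporation idea: contributors earn
# equity shares for measurable work (e.g., labeling images), and every
# time the dataset is licensed for training, revenue splits pro rata.
from collections import defaultdict

class AutonomousDataCo:
    def __init__(self):
        self.shares = defaultdict(float)    # contributor -> equity units
        self.balances = defaultdict(float)  # contributor -> accrued payouts

    def record_contribution(self, contributor: str, labeled_images: int):
        # One equity unit per verified labeled image (an arbitrary rule).
        self.shares[contributor] += labeled_images

    def license_for_training(self, fee: float):
        # A training customer pays a fee; split it across all shares.
        total = sum(self.shares.values())
        for person, units in self.shares.items():
            self.balances[person] += fee * units / total

co = AutonomousDataCo()
co.record_contribution("alice", 200)  # an hour of labeling, say
co.record_contribution("bob", 50)
co.license_for_training(1000.0)       # first training customer
co.license_for_training(5000.0)       # value keeps compounding downstream
print(dict(co.balances))  # alice keeps earning on her original hour of work
```

This is the annuity effect in the anecdote: the $2 hour of labeling keeps paying out as long as new training customers keep licensing the repository.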
I always thought that, you know, crypto would be better off within an equity and corporate kind of context, to help alleviate some of the problems associated with trust at scale.
[01:05:24] Shawn Yeager:
And I think that... yeah. And there's a lot of that emerging. So, wow. Fascinating, John. I know Global Guerrillas is one of your primary focuses and projects. I'll be sure that everyone gets a pointer to your Substack. As I say, I've really enjoyed it, and I recommend everybody check it out so we can keep getting these glimpses of what's coming in the future. Thanks so much for your time today, John. Oh, you're welcome. Really appreciate it. Talk soon. Yep. Bye bye.
Introduction and Background of John Robb
Userland Software and Early Internet Innovations
The Concept of Trust and Its Evolution
AI and Centralized Systems: Risks and Impacts
Emergence of Networked Organizations and Political Swarms
AI's Impact on Society and Individual Reality
Organizational Resilience in an AI-Driven World
The Future of AI in Corporations and Society
Conclusion: Future Trends and Predictions