The Plan for a Smooth Transition to a Post-AGI World

Show notes

In this episode of the Big Bang Tech Report, Jens de Buhr and tech strategist Alvin Wang Graylin unpack Alvin’s new Stanford paper “Beyond Rivalry” — a blueprint for navigating the geopolitical, economic and societal turbulence of the AI age.

They break down why the U.S. and China aren’t actually in an AI Cold War, what AGI really changes, and how a “guardian AI” could make the technology safer for everyone. Alvin explains his two big ideas for a soft landing after large-scale job disruption: a “GI Bill for the AI Age” and a global “AI Marshall Plan.”

Plus: Why space-based data centers won’t save us, why 2025’s real breakthrough is small but mighty AI models, and what headline Alvin hopes the world will read in 2026.

Want to go deeper? Alvin’s Stanford paper “Beyond Rivalry: A US–China Policy Framework for the Age of Transformative AI” lays out the full blueprint behind this conversation — from de-escalation and shared AI safety to practical transition ideas like a “GI Bill for the AI Age” and a global “AI Marshall Plan.”

About the guests: Jens de Buhr – Founder & CEO JDB Holding; publisher of DUP UNTERNEHMER; co-founder BIG BANG AI Festival. He connects business, politics, and research to shape Germany’s digital future. Linkedin: Jens de Buhr Web: https://www.dup-magazin.de

Alvin Wang Graylin – Global tech strategist; author “Our Next Reality”; Chairman Virtual World Society. 35+ years across AI, semiconductors, XR, cybersecurity; ex-HTC/Intel/IBM/Trend Micro; 4x founder; investor in 100+ startups; Digital Fellow at Stanford HAI; lecturer at MIT; advisor on AI policy, global governance, and the post-AGI transition. Linkedin: Alvin Wang Graylin Substack: https://substack.com/@awgraylin | X: https://x.com/AGraylin | https://ournextreality.com

In this episode of the Big Bang Tech Report, Jens de Buhr and tech strategist Alvin Wang Graylin talk about Alvin’s new Stanford paper “Beyond Rivalry,” a roadmap for navigating the geopolitical, economic, and societal upheavals of the AI age.

They explain why the US and China are not actually locked in a true AI Cold War, what AGI really changes, and why a kind of “guardian AI” could make the technology safer for everyone. Alvin also presents his two big ideas for a soft landing after massive job displacement: an education and support program for the AI age modeled on the historic GI Bill, and a global AI Marshall Plan meant to distribute the technology’s benefits more fairly around the world.

They also discuss why data centers in space are not a solution, why 2025’s real breakthroughs are coming from small, enormously capable models, and which headline Alvin hopes to read in 2026.

Show transcript

00:00:00: How can we de-escalate the tension between US and China?

00:00:03: How can we make sure that when we do de-escalate, we make AI that is safe for the world?

00:00:09: And then lastly, when that AI gets diffused to the world and we start to have job displacement, how can we have a soft landing for the world?

00:00:19: Welcome to a new episode of the Big Bang Tech Report.

00:00:25: We bring clarity to a world that's changing at high speed.

00:00:29: My name is Jens de Buhr.

00:00:31: I'm a journalist covering technology, business, and politics.

00:00:37: And I'm Alvin Graylin.

00:00:39: I've been in the tech and AI field for thirty-five years and have worked in China and the US, and I think I'll give a different perspective that might be helpful to everyone.

00:00:50: Yeah, and you have presented a paper with very famous people, a Stanford paper.

00:00:58: And can you tell us a little bit what is this paper about?

00:01:03: Why is it important and who are the other authors?

00:01:08: Yeah, I mean, this is a volume called the Digitalist Papers.

00:01:13: This is the second year Stanford has been doing this, essentially bringing together thinkers in the field, you know, people like Eric Schmidt or Yoshua Bengio or Nick Bostrom.

00:01:23: They're the type of people that we look on as the luminaries in the space, and it's a real honor to be part of that volume.

00:01:33: A paper, we think, is about twenty pages or something like that, but this is a real book, isn't it?

00:01:40: Yeah, so these twenty papers become about five hundred pages in terms of a collection of really interesting and diverse thoughts that are telling the world what we need to be thinking about and also what we can do about it from an AI policy perspective.

00:02:01: What's your main point of your paper, of your thoughts?

00:02:05: Explain us a little bit.

00:02:06: What have you published?

00:02:10: Within the volume there's everything from economics, to jobs, to safety, to different types of scaling technologies.

00:02:19: Mine is really focused on the main areas.

00:02:23: It's called beyond rivalry.

00:02:25: And it's really focused on the geopolitical implications.

00:02:28: and the economic implications of AI.

00:02:31: And how can we de-escalate the tension between US and China?

00:02:36: How can we make sure that when we do de-escalate, we make AI that is safe for the world?

00:02:42: And then lastly, when that AI gets diffused to the world and we start to have job displacement, how can we have a soft landing for the world?

00:02:50: So these are the three main questions that I deal with in my paper.

00:02:55: But right now we have the impression that there is something like a new Cold War.

00:03:00: There was a Cold War between the US and Russia, also Soviet Union.

00:03:06: And now it's something similar.

00:03:08: If you look at the administration, you are right now in Washington DC.

00:03:13: And we have the impression in Europe that there is something like a new Cold War, an AI war.

00:03:21: How do you see that?

00:03:22: Yeah, I mean, unfortunately, this is the narrative that's been painted by a lot of people, you know, both in the press and also some people in government, to create an enemy so that they can gain both internal resources as well as, you know, more attention.

00:03:42: But if you look deeply, if you look at how the US and China are approaching AI,

00:03:50: They're taking very different perspectives.

00:03:53: The US narrative right now is that the US has to create the AGI first and needs to build giant data centers and needs to open up their national labs.

00:04:04: and all this secret data is given to the private labs for training.

00:04:09: And once you get to AGI, then you go to ASI.

00:04:13: And that ASI allows you to rule the world.

00:04:17: And that, something called the decisive strategic advantage, is the strategy that the American government and labs are going for.

00:04:25: The Chinese approach seems to be much more about how do we take this technology and grow our economy?

00:04:31: How do we put it into manufacturing, into medicine, into education?

00:04:37: And make sure that the technology gets out to the public so that it creates a better lifestyle for people.

00:04:43: And one is creating closed source, which is what America is doing.

00:04:47: All the leading labs in America are closed source.

00:04:50: All the leading labs in China are actually open source.

00:04:53: So they're actually not racing.

00:04:55: I think it's a very strange situation where I think America thinks that there's a race, but there actually isn't a race and there isn't a Cold War.

00:05:05: But we're treating it that way.

00:05:08: But people who are not so familiar with the subject, what is AGI?

00:05:14: What does it mean for you?

00:05:15: And what is new?

00:05:17: What is coming up on the horizon?

00:05:19: And when will it come?

00:05:21: Yeah, so AGI is the term Artificial General Intelligence.

00:05:26: And the idea is that this is a technology that can do the work of an average worker, or maybe even more than average.

00:05:37: The current technology today is actually already, in some areas, smarter than many humans, maybe even some of the top humans.

00:05:46: So what's happening today is that there's essentially a jagged edge of AI, where in some things it's very good and in other things it's still subpar.

00:05:54: And so it hasn't gotten mass adoption in all of the different labor pools.

00:05:58: But what we'll find is that when AGI arrives, and this is the promise, it will be able to learn everything and become good at almost anything that humans can do in front of the screen.

00:06:09: It will be able to do as good or better, and it will continue to learn so that it improves.

00:06:16: It improves itself so that we do not know after a while what it is doing.

00:06:22: So it improves itself.

00:06:24: just like you have children when they go to elementary school or high school or college, they continue to improve.

00:06:30: And the same way with this AI, if you bring on a new AI, it may come to your company, it doesn't understand your processes, you teach it, and it learns all these capabilities and then it's able to do any job inside your company.

00:06:43: That's the idea of what AGI is supposed to do.

00:06:48: Now, the one thing I think people don't realize is that we don't need to get to AGI to have most of this capability.

00:06:55: It may not be self-improving, it may not be able to do everybody's job, but for it to do specific tasks, for it to do low-level jobs or entry-level jobs, many of the models that are out there today can already accomplish some of these things.

00:07:13: So you talked about... What is the partnership between China and the US?

00:07:20: that they will work together in some parts?

00:07:24: But you need trust for that.

00:07:26: You need trust that if you are not rivals, if you want to work together, how do you see this?

00:07:33: What are your opinions about trust?

00:07:36: Yeah, so this is actually one of the core issues is that both sides right now don't trust each other.

00:07:42: And if you don't trust each other, you cannot cooperate because there's no basis for cooperation.

00:07:48: And the idea of the paper is to say, hey, look, we don't necessarily trust each other, but we have a lot of common risks.

00:07:57: and common benefits, right?

00:07:59: So if we start to look at AI right now as a weapon or as a tool for dominance, then you want to keep secrets and you don't want to cooperate.

00:08:09: But if you start to think about how can this technology become safer so that it does not hurt our society, right?

00:08:17: If you have an unsafe technology that falls into the hands of bad actors, or that itself, you know, somehow directly or unintentionally creates harm for people.

00:08:30: That's something that I think both countries, and actually all people in the world, do not want.

00:08:35: So we have a lot of common, common risks that we need to be looking at.

00:08:40: And, you know, just like even during the Cold War, even during the heights of the US-Soviet Union rivalry on nuclear weapons.

00:08:50: They work together.

00:08:52: They agreed to inspect each other's silos.

00:08:55: They made sure to start to disarm, actually, to destroy some of the bombs.

00:09:00: They also worked together to make sure there was no proliferation into a lot of rogue countries, or for terrorists to get access to it.

00:09:07: So when there's a common threat, even when you have competition, it still makes sense to work together.

00:09:15: And for things like the ozone layer, the Montreal Protocol, when we worked together in the eighties and nineties, we were able to actually reduce the CFC emissions, which then actually helped to restore the ozone layer to protect us from dangerous rays.

00:09:36: So it is possible, but we have to start by doing things that are easier and more trustworthy, which means separating national security issues from civilian issues.

00:09:47: So if we start work on AI safety, I think everybody can agree on that.

00:09:51: If we start working on health-related things, like drug discovery, where we work on solving cancer together, I think those are the kinds of things where nobody should argue that we should not work together.

00:10:05: You are talking in your paper about a soft landing, and we need a soft landing, because a lot of people fear that with AGI and with the next phase of AI, we will lose a lot of jobs, because the technology is very strong and it can do a lot of work.

00:10:26: So what do you propose?

00:10:28: How do we achieve a soft landing?

00:10:32: Yeah, so there's two kinds of soft landings.

00:10:34: There's a domestic national soft landing, and then there's a global soft landing.

00:10:38: And I have two different plans for that.

00:10:40: One plan is called the GI Bill for the AI Age.

00:10:43: So after World War II, America had fifteen million soldiers who were coming back.

00:10:49: and they had no place to go.

00:10:50: And so the American government said, hey, we're going to give you free college education.

00:10:54: You go to any school you want.

00:10:56: You can have free housing loans.

00:10:58: You have one to two years of stipends, so you can live and you can have free medical.

00:11:03: And with that type of a security net, they essentially allowed America to grow safely and stably and to absorb these fifteen million people without creating economic shock.

00:11:15: And it then actually created a boom in America, because, you know, within ten years there were twice as many educated people in the country.

00:11:23: And so that's what we need to think about with this AI job displacement issue, because for countries like America, and actually Germany as well, with high numbers of white collar workers, the more advanced the country, the more white collar workers there are, the more exposed you are.

00:11:41: And so we need to think about how do we treat these workers the same with the same dignity that we treated soldiers after a war.

00:11:48: And now the other issue is how do we make sure that the technologies we create are evenly distributed to both developed and lesser developed countries.

00:11:59: And so I propose this idea of a Marshall Plan for AI.

00:12:03: Just like after World War II, America spent tens of billions of dollars helping rebuild parts of Europe and parts of Asia.

00:12:13: We need to think about how the advanced countries, China, the US, maybe parts of Europe, can help to distribute some of the benefits that are coming from AI and AGI to the Global South, so that it can rise together with the rest of the economy.

00:12:32: Because you don't really want to create an even greater divide.

That creates challenges of both mass migration as well as just societal issues and conflict.

00:12:46: So I think these are two major plans.

00:12:48: There's a lot more details in the paper and it's hard to discuss in a short time.

00:12:53: But you have published the paper and now you are in DC and Washington DC.

00:12:59: Is there an echo on this paper and what do the people say about your thoughts?

00:13:06: Yeah, so, I mean, I've been here for five days and the good news is pretty much everybody I've talked to and explained these ideas to, they've actually been very receptive, in fact, more so than I would have expected.

00:13:18: And, you know, I think this also is reflected in terms of the US government policies.

00:13:25: You know, last week they started to open up H200 chip sales to China.

00:13:33: So that is essentially a signal to say, hey, let's reduce the tension.

00:13:38: So, you know, I think that people all want the same things.

00:13:43: They want to have a peaceful world.

00:13:45: They want to have a good place for their children to grow up.

00:13:47: And the type of ideas that I'm trying to propose really gives us a better chance for that to happen.

00:13:56: Speaking of new ideas, speaking of AGI, there are new plans, especially in the US, to establish data centers in space.

00:14:10: Is it a good idea or is it something we will see in a few years?

00:14:18: Yeah, so I think in the last two to three weeks, there's just been this very high hype and coverage of this idea that, now that we have, I guess, maybe some difficulties building data centers on Earth, we should go and build them in space.

00:14:39: Because the key assumption is that, hey, you're in space, it's very cold, and one of the problems with the data centers on Earth is that you have to cool them, you have to use a lot of water.

00:14:49: Unfortunately, that assumption is actually a false assumption, because when you're in space, the only way to get rid of heat is through black-body radiation, which is a very inefficient way to get heat off of a particular surface.

00:15:07: In fact, it's about maybe ten to a hundred times less efficient than using either air cooling or using liquid cooling.

00:15:16: And if that's the case, then you have to have these giant radiators in space, which is actually quite heavy.
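The radiator problem Alvin describes can be checked against the Stefan-Boltzmann law. A rough sketch below estimates the radiator area a one-gigawatt data center would need in space; the radiator temperature and emissivity are assumed illustrative values, not figures from the conversation:

```python
# Back-of-envelope check of the space-cooling problem described above.
# Stefan-Boltzmann law: radiated power P = emissivity * sigma * A * T^4,
# so the required area is A = P / (emissivity * sigma * T^4).
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W / (m^2 K^4)
EMISSIVITY = 0.9    # assumed: a good radiator coating
T_RADIATOR = 300.0  # assumed radiator temperature in kelvin (~27 C)

def radiator_area_m2(heat_watts: float) -> float:
    """Radiator area needed to reject `heat_watts` purely by black-body radiation."""
    flux = EMISSIVITY * SIGMA * T_RADIATOR ** 4  # watts radiated per square metre
    return heat_watts / flux

# A 1 GW data center must reject roughly 1 GW of heat.
area = radiator_area_m2(1e9)
print(f"{area / 1e6:.1f} million square metres of radiator")  # ~2.4 million m^2
```

At these assumed values the radiating surface works out to a couple of square kilometres, which is why the radiators dominate the mass budget discussed next.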

00:15:23: And I did a little bit of math on this, and the funny thing is, right now there's about five thousand tons of human-made mass floating in space today.

00:15:35: Out of, you know, thirty, forty years of us sending things into space, we now have about five thousand tons.

00:15:43: To make a one gigawatt data center, it requires something around fifty to sixty thousand tons.

00:15:52: So that's somewhere between five to ten times of all the tonnage that has been put into space for us to have a one gigawatt data center.

00:16:03: And essentially with the current launch capacity, this would take many years, a lot more than a few years, probably ten plus years to send this up.

00:16:13: And the cost to send each kilogram up is around two thousand dollars.

00:16:22: For it to be economical, we need to be around one hundred dollars.

00:16:26: So launch costs would have to come down twenty times, and it would take, you know, five to ten years to send it up.

00:16:33: It just is not actually practical.

00:16:37: And it is not economical.
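The launch arithmetic behind this conclusion can be sketched with the figures quoted in the conversation. The data-center mass is read here as fifty to sixty thousand tons (taking the midpoint), so that, as stated, it comes to several times everything humanity has launched to date:

```python
# Rough launch-mass and launch-cost arithmetic for the figures quoted above.
TOTAL_IN_ORBIT_T = 5_000     # tons of human-made mass in orbit today (quoted)
DATACENTER_MASS_T = 55_000   # assumed midpoint of the quoted 50-60 thousand tons
COST_NOW_USD_KG = 2_000      # quoted cost to orbit per kilogram today
COST_TARGET_USD_KG = 100     # quoted cost per kilogram needed to be economical

# How many multiples of all historical orbital mass this one project requires.
ratio = DATACENTER_MASS_T / TOTAL_IN_ORBIT_T

# Total launch bill at today's prices (tons -> kg, then dollars per kg).
launch_cost_now = DATACENTER_MASS_T * 1_000 * COST_NOW_USD_KG

print(f"{ratio:.0f}x everything ever launched")                  # 11x
print(f"${launch_cost_now / 1e9:.0f} billion at today's prices") # $110 billion
print(f"launch cost must fall {COST_NOW_USD_KG // COST_TARGET_USD_KG}x")  # 20x
```

Even before considering launch cadence, the launch bill alone at today's prices lands around a hundred billion dollars for a single gigawatt, which supports the "not economical" conclusion.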

00:16:40: And the last thing is that it also is not necessarily helpful, because, you know, every month the world is putting together tens of gigawatts of new generation capacity.

00:16:54: For us to spend a few years putting one gigawatt up in space doesn't help anything.

00:17:01: So that's not a good idea.

00:17:03: There are more problems I hear than some...

00:17:07: There's economic problems, there's capacity problems, and there's just value problems.

00:17:14: And then lastly, once you put something into space, How do you maintain it?

00:17:19: Because when you're on earth, if something breaks, you can have somebody go in and fix it.

00:17:22: When you're in space, we can't just send somebody up to go fix these things.

00:17:27: So it really doesn't make sense.

00:17:29: And I think the reason that people are talking about it is because it somehow helps their company, because they're either a space company or they're a chip company.

00:17:39: And so it's another hype cycle to help promote whatever they're selling.

00:17:48: We are very close to the finish of this year, twenty twenty-five.

00:17:55: And if you look back, what was the headline of the year?

00:17:59: What was the breaking news of the year for you, personally?

00:18:04: Yeah, I mean, I don't know if it's breaking news, but I think the biggest breakthroughs this year have really come in the area of making models smaller, making models more capable, and making models able to be trained

00:18:17: without giant resources.

00:18:19: And now, you know, over the last few weeks, there's been multiple models that are now just a few billion parameters that are performing extremely well.

00:18:27: Doing what used to take hundreds of billions of parameters just half a year ago.

00:18:32: And that means that AI can be accessible to everyone.

00:18:38: Yeah.

00:18:39: And if you look to the next year, what is a wish for you?

00:18:44: What could be the headline of next year, twenty twenty-six?

00:18:49: If I could wish for something, I want there to be an announcement that says we will create a global organization that starts to think about making AI that is a global public good.

00:19:04: Something that we build together and we share the benefits together.

00:19:08: If that can be agreed, whether we call it a CERN for AI or a global AI project, something that treats AI not as a weapon, but as a tool for global advancement, I think that would be the most amazing outcome that can come in the coming year.

00:19:28: Okay.

00:19:28: And in general, I think we both hope that the war between Ukraine and Russia will end and that we will have a peaceful next year.

00:19:38: And so our episode is over.

00:19:43: Thank you again, Alvin, for sharing all your information, your news.

00:19:50: So guys, if you like our show, please send us some emails to deburr.de.

00:19:59: share your ideas and we want to support you with information and with orientation and inspiration next year again.

00:20:09: And yeah, Alvin, do you have something that you want to share?

00:20:14: Thank you to all the people who have been watching us, and thank you for sharing our ideas with your community.

00:20:22: Keep talking to us and keep letting us know what you care about and what we should be focusing more on.

00:20:27: So thanks again.

00:20:30: I think that's the key.

00:20:32: Keep talking.

00:20:33: Have a conversation.

00:20:35: Share ideas.

00:20:36: And try to understand what's going on.

00:20:40: And right now there are a lot of things going on.

00:20:43: There's a lot of flow.

00:20:46: And yesterday I gave a speech here on a cruise.

00:20:50: And people are asking me a lot of questions, trying to understand.

00:20:53: And so these are intense times.

00:20:57: And we will see each other again in January.

00:21:00: And so I'm really happy to have this show here, the Big Bang Tech Report.

00:21:06: And thank you again and let's stay tuned.
