AI, Robots, Jobs - We need a new mindset
Show notes
In the fifth episode of the BIG BANG Tech Report, Jens de Buhr and Alvin Wang Graylin once again sit down to make sense of a rapidly shifting tech landscape. Fresh from Davos, Alvin challenges dominant beliefs about AGI, data centers, jobs, and global competition — arguing that economic disruption may arrive long before true artificial intelligence does. They discuss why trillion-dollar AI investments risk creating bubbles, why models are becoming commodities, and why cooperation matters more than winning an “AI race.” At the core of the conversation is one message: the future of AI will be decided less by technology than by the mindset with which societies choose to use it.
About the guests: Jens de Buhr – Founder & CEO, JDB Holding; publisher of DUP UNTERNEHMER; co-founder BIG BANG AI Festival. He connects business, politics, and research to shape Germany’s digital future.
Alvin Wang Graylin – Global tech strategist; author “Our Next Reality”; Chairman Virtual World Society. 35+ years across AI, semiconductors, XR, cybersecurity; ex-HTC/Intel/IBM/Trend Micro; founder/investor; Stanford HAI Digital Fellow; MIT lecturer; advisor on AI policy and governance.
Links: LinkedIn | Substack | X | https://ournextreality.com
We need a new mindset for AI, robots, and jobs. In the fifth episode of the BIG BANG Tech Report, Jens de Buhr and Alvin Wang Graylin once again discuss the profound upheavals driven by artificial intelligence. Fresh from Davos, Alvin questions common assumptions about AGI, data centers, labor markets, and global competition, and warns that economic disruptions could arrive sooner than expected. The focus is on the risks of a new AI bubble, the rapid commoditization of models, and the need for international cooperation. The episode's central thesis: it is not the technology that will decide the future, but society's mindset in dealing with it.
About the guests: Jens de Buhr – Founder & CEO, JDB Holding; publisher of DUP UNTERNEHMER; co-founder of the BIG BANG AI Festival. He connects business, politics, and research to help shape Germany's digital future.
Alvin Wang Graylin – Global tech strategist; author of "Our Next Reality"; Chairman of the Virtual World Society. More than 35 years of experience in AI, semiconductors, XR, and cybersecurity; ex-HTC/Intel/IBM/Trend Micro; founder and investor; Stanford HAI Digital Fellow; MIT lecturer; advisor on AI policy and governance.
Links: LinkedIn | Substack | X | https://ournextreality.com
Show transcript
00:00:00: What people don't realize is the idea of a job, the idea of trading time for money, is actually a very recent phenomenon.
00:00:07: For the majority of the three hundred thousand years of our history, nobody had jobs.
00:00:12: We worked, we worked, but we didn't have jobs.
00:00:15: We worked in terms of helping our family survive, our tribe survive; we grew food, we went hunting, we wove baskets, but we weren't being paid by somebody.
00:00:24: We were doing it for our own good and for the good of our community.
00:00:28: And I think that's something that we need to start thinking about.
00:00:31: How can we start to go back to the roots of what work is?
00:00:39: Welcome to a new episode of Big Bang Tech Report, the podcast that brings clarity to a world where change is faster than ever.
00:00:49: My name is Jens de Buhr.
00:00:51: In this podcast, I regularly speak with my co-host, Alvin Wang Graylin,
00:00:56: a globally
00:00:57: recognized AI expert who has just returned from Davos at the World Economic Forum.
00:01:05: This year, Alvin was everywhere in Davos.
00:01:10: Nine panels, high-level background talks, multiple interviews with leaders from politics, tech and business.
00:01:20: After these intensive days, he published a list that is now circulating fast among AI insiders.
00:01:30: We must question what we think we know.
00:01:35: Ten statements about AI, AGI, robots, economy, and risk.
00:01:40: Challenging much of what we believe to be true.
00:01:44: Alvin, great to have you back.
00:01:47: Yeah, thanks for that introduction.
00:01:49: Hard to live up to.
00:01:52: So let's start.
00:01:54: Alvin, your first statement is striking.
00:01:56: AGI is coming.
00:01:58: But the economy can still break without it.
00:02:02: Most people
00:02:03: believe disruption starts with AGI.
00:02:05: You say the breakdown can happen earlier.
00:02:07: Why?
00:02:09: Yeah, I think there's been a lot of discussion to say, what is AGI?
00:02:13: AGI is when it can replace human workers.
00:02:16: But what the data is showing, and what economists' studies are showing, is that you don't need to replace a hundred percent of workers for the economic impact to take place, and for the commoditization
00:02:28: of cognitive work to take place.
00:02:30: And particularly in western markets where sixty, seventy percent of workers are white collar workers that do their work in front of a screen or can be replaced with something that's done in front of a screen, those are going to be the most exposed.
00:02:44: And, you know, we haven't seen as much of the disruption or the displacement as people think, because the curve is really more like a flat curve, and then it drops when it gets to about eighty percent of the tasks being doable by AI, and we're getting very, very close to that.
00:03:02: With all the agentic technologies that are coming out, these systems are becoming amazingly capable.
00:03:09: So in fact, I think later we're going to talk about Clawdbot, which just came out last week, and it's on fire.
00:03:19: So we are coming closer to the future.
00:03:21: Yeah, so even without a full AGI that can learn and get even better by itself, we will, I think, this year start to see significant displacement.
00:03:33: And if the economy is not supported somehow by government policies, then that could definitely have impact.
00:03:42: that goes beyond what people expected.
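The threshold effect Alvin describes, where little displacement happens until AI covers most of a job's tasks, can be sketched as a toy model (the 80 percent threshold comes from the episode; the curve shape and other numbers are illustrative assumptions):

```python
def displacement_share(task_coverage, threshold=0.8, residual=0.05):
    """Toy model: a job is a bundle of tasks, and the job is only
    displaced once AI can do most of those tasks (here, 80 percent)."""
    if task_coverage < threshold:
        # below the threshold, AI mostly augments workers: a near-flat curve
        return residual * task_coverage / threshold
    # past the threshold, displacement climbs steeply toward 100 percent
    return residual + (1 - residual) * (task_coverage - threshold) / (1 - threshold)

for coverage in (0.5, 0.8, 0.9, 1.0):
    print(f"{coverage:.0%} of tasks automatable -> {displacement_share(coverage):.0%} of jobs displaced")
```

The point of the sketch is the kink at the threshold: doubling task coverage from 40 to 80 percent barely moves displacement, while the last 20 percent moves it from a few percent to nearly all jobs.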
00:03:46: That connects you directly to your second point.
00:03:50: Scaling AI compute isn't the answer.
00:03:54: It's part of the problem.
00:03:56: What does that mean?
00:03:58: Yeah, so as we've talked about before the last few years, the focus has really been building bigger models, bigger data centers, more GPUs.
00:04:08: probably two or three trillion dollars expected to be spent in the next two or three years on new data centers.
00:04:14: And with the models that are coming out, and having talked to a lot of the experts in the space and a lot of people at Davos, the godfathers of AI, there's a strong belief that a lot of AI is going to move more to the edge,
00:04:32: and the edge meaning to computers, to laptops, to phones, to glasses, where a lot of the work will be done there and less and less of the work will be done in the cloud.
00:04:46: And the cloud is really only being used for the initial training, but inference can be done without these giant data centers.
00:04:53: But we're spending so much money on the data centers that it's going to create a very large economic bubble.
00:05:00: that if it pops, it can have dramatic detrimental effect on the stock market and on society and on employment and on general sentiment of the consumer.
00:05:13: So you do not believe in the strategy of Jensen Huang and NVIDIA to build bigger and bigger data centers?
00:05:22: I don't think it's... I mean, Jensen is doing what he's doing because he has to, right?
00:05:26: Because his business is to sell
00:05:28: more GPUs.
00:05:30: So he has to push this narrative.
00:05:33: But the non-GPU makers and the non-data-center sellers have a different incentive: theirs is to serve the customer at the lowest cost, because that's how you make money.
00:05:47: And right now, pretty much none of the AI companies are actually profitable.
00:05:53: For all of them, particularly the large ones building out hundreds of billions of dollars of data centers, there is just no way to recoup that capex.
00:06:03: And that's a very high-risk gamble that they're playing.
00:06:10: They're not just playing with their company, they're actually playing with the global economy.
00:06:14: And I think they should be more careful doing that.
00:06:18: That leads us to the third point.
00:06:20: There is an AI bubble.
00:06:22: Models are soon commodities.
00:06:24: I think you've explained a little bit about it right now, right?
00:06:27: Well, yeah, I think it goes beyond that.
00:06:30: The commodity issue is really the key.
00:06:33: If something is widely available and is relatively low cost, there's really no way to get profits.
00:06:42: The market pushes it to zero marginal cost, to zero margins for the providers because it's generally available.
00:06:50: You're not going to pay a lot of money to buy air because it's available everywhere.
00:06:54: or you're not going to pay for sunlight because it's available everywhere.
00:06:58: And this is what's happening right now with these AI models.
00:07:01: They're getting very, very capable and they're open source now.
00:07:04: So, you know, in fact, we're going to get to this a little bit later, but yesterday, Kimi K2.5 just dropped.
00:07:12: And the day before, you had, you know, Qwen3-Max come out.
00:07:16: And these systems are essentially peer competitors to the frontier models coming out of the West.
00:07:23: Those models out of the West are costing billions or tens of billions per training run to make them this capable.
00:07:30: But then within a few days or a few weeks, they are either overtaken or come at peer with free models.
00:07:39: So how can that be a long-term sustainable strategy when you're paying a lot more than you can ever get back?
00:07:51: That brings us to your fourth statement: there is no winning the AI race; cooperation is key.
00:07:58: But if there is no winner, are there losers or not?
00:08:03: Yes, so here's the issue, is that there are no winners, but the players today, at least the players in the West, believe there are winners, and they are spending and racing like there are winners.
00:08:15: And if they continue to behave that way, then everybody loses because they will create this giant bubble, this giant race that makes the economy go down.
00:08:25: They will also create a wall between labs and between countries, which then makes the models unsafe because they're also throwing away all the safety guardrails because they want to go faster.
00:08:39: So everything that they're doing to try to win is actually the exact thing that will make the world more dangerous and make the economy collapse.
00:08:48: Hmm.
00:08:49: Point five might be the most idealistic or the most necessary.
00:08:54: We must jointly develop AGI and share its benefits.
00:08:59: In today's geopolitical climate, how realistic is this?
00:09:05: Yeah, I think this is definitely the big issue, particularly with the US right now going around and imposing its will on everybody.
00:09:17: There's not a lot of sense of cooperation.
00:09:20: But actually, being at Davos, I got a different sense.
00:09:24: I think that, in a way, the behavior of the US is actually pushing everybody else together.
00:09:30: So, notice that recently there were announcements between Canada and China, between India and Europe, between the different Nordic countries, and within Europe.
00:09:43: There's a lot of communication now to say, okay, how do the middle countries come together?
00:09:50: How do you stop being bullied?
00:09:52: And digital sovereignty, technology sovereignty, and energy sovereignty were all very hot topics at Davos, which means that for little countries that don't have either the talent or the data centers to make their own models, it makes a lot more sense to work together.
00:10:14: It makes a lot more sense to really treat AI as something that is, like we said earlier, a public good.
00:10:20: If it's a public good, then let's share it and let's maximize value for everyone instead of maximizing profit for a few.
00:10:28: Okay.
00:10:29: Now we come to a point that frightens a lot of people.
00:10:33: You say AI and robots will likely take most of our jobs.
00:10:39: And now, what is the "but"?
00:10:44: Yeah, I think there's still debate on this, but it's really more about timing than whether it happens.
00:10:50: Is it five years, or is it ten years?
00:10:53: Is it two years?
00:10:54: Is it twenty years?
00:10:55: But the key is it will happen.
00:10:58: And it will happen in lumps.
00:11:00: It won't happen all at once.
00:11:02: Because what the data is showing is that AI is good at certain things.
00:11:08: It's a jagged intelligence.
00:11:09: It's good at certain things, very, very good.
00:11:11: And other things, it's very bad.
00:11:14: And it's really about how much data it has, and how much the training process can automate that learning.
00:11:21: And right now, a lot of the data for certain tasks is not available, particularly for physical tasks.
00:11:26: So if you're doing physical labor, it will take longer.
00:11:29: But eventually, robots will do the physical labor as well.
00:11:34: Now, the "but" part is the important part. What people don't realize is that the idea of a job, the idea of trading time for money, is actually a very recent phenomenon.
00:11:48: For the majority of the three hundred thousand years of our history, nobody had jobs.
00:11:54: We worked, we worked, but we didn't have jobs.
00:11:56: We worked in terms of helping our family survive, our tribe survive; we grew food, we went hunting, we wove baskets, but we weren't being paid by somebody.
00:12:06: We were doing it for our own good and for the good of our community.
00:12:10: And I think that's something that we need to start thinking about.
00:12:13: How can we start to go back?
00:12:15: to the roots of what work is, but not the roots of what job is.
00:12:20: Job came when we started to hire people to do work for somebody else.
00:12:25: It's not to benefit themselves.
00:12:27: It's to make money, then that money can be used to buy things.
00:12:32: And I'll actually ask you a question that may surprise you.
00:12:35: What year on this planet did more than half of the world have jobs?
00:12:43: I don't know.
00:12:43: Tell me.
00:12:45: Twenty-fifteen.
00:12:48: Twenty-fifteen was the year when half of the workers on this planet were employed by somebody else, in a job.
00:12:57: And this is from the International Labour Organization.
00:13:01: So it is, you know, the official definition of jobs.
00:13:05: It's like essentially just yesterday, right?
00:13:08: We think that this has been around for thousands of years and it's not true.
00:13:11: Actually, the concept of wage labor just happened maybe a few hundred years ago, and really more so since the Industrial Revolution, about two or three hundred years ago.
00:13:22: And that's only in the western industrialized nations.
00:13:25: In the lesser developed countries, it's really only in the last, you know, fifty years that that's really started to happen.
00:13:36: Your next statement, I think we have already talked about it.
00:13:39: It's about humanoid robots,
00:13:41: and you say humanoid robots are not coming to your home soon.
00:13:45: We didn't talk about it.
00:13:47: We talked about robots in general taking jobs.
00:13:51: But right now, if you saw CES, everybody was talking about humanoid robots, and you go to Davos and there are hardly any.
00:13:57: I think I saw one humanoid robot the entire week.
00:14:02: Well, one reason is that the agenda of Davos is set a year in advance, right?
00:14:07: Essentially, at the end of the week, they set next year's agenda.
00:14:11: So last year, nobody talked about humanoid robots.
00:14:13: So this year, nobody is showing humanoid robots.
00:14:18: But the other thing is that the humanoid form in general is not necessarily the best form for a robot.
00:14:22: And I also spoke to a lot of people who are in robotics.
00:14:24: And they're saying, yes, robots are important.
00:14:26: But not everything needs to stand on two legs.
00:14:29: It could be on a platform.
00:14:30: It could be a fixed robot with just arms.
00:14:32: Robots will do a lot of work.
00:14:34: And it will start in the factories.
00:14:36: It will start in business.
00:14:38: But having it in your home, in a very dynamic environment where every home is different, is another matter.
00:14:44: This is a significant challenge for robotics training today.
00:14:48: And there isn't necessarily a very established and clear way of how to train robots in that environment.
00:14:55: So that's what I'm saying is going to take a while.
00:14:57: I'm not saying it won't happen.
00:14:58: It will happen at some point.
00:15:00: But it won't be around the corner like what you hear from Elon Musk.
00:15:06: You mentioned Elon Musk because he was there, and I think he told us that in two or three years everybody will have an Optimus.
00:15:17: Like I said, in business, everybody right now who is talking on stage, they all have a certain incentive.
00:15:26: They're trying to sell robots or they're trying to sell GPUs or trying to sell spaceflight.
00:15:31: They're trying to sell you something.
00:15:34: I think we have to take everything with a grain of salt when it comes from a particular business owner because they have an agenda in terms of what they're saying.
00:15:43: So you are the only one who is telling the truth?
00:15:46: Well, I don't know if I'm the only one.
00:15:47: But at least I am currently not affiliated with anything that has a conflict of interest.
00:15:54: Your next statement is a social point.
00:16:00: The social safety net for AI is required and affordable.
00:16:04: Affordable? Explain that to us.
00:16:06: Yeah, so you hear a lot of people say, oh yeah, well, we know that people are going to lose their jobs, but there's just no way, how do you pay for this?
00:16:15: This is too expensive.
00:16:17: But what people don't realize is that the one thing that has continually happened for the last hundred years in technology is that as technology comes, things become cheaper and cheaper to make.
00:16:27: And the things that we need, things like food, things like energy, things like machines, they're getting cheaper and cheaper to make and their cost is a fraction of what they were in the past.
00:16:39: So technology creates deflationary forces for anything that can be automated.
00:16:44: And so necessity goods like food, energy, housing, medicine can all be automated.
00:16:52: And which means that the cost of those goods will continue to come down in price, which makes them more and more affordable.
00:16:58: The things that will become more expensive are luxury goods and luxury experiences and things that require pampering or specialists to do.
00:17:06: Those will continue to go up, but that's not what's mandatory for people to have a basic, happy life.
00:17:12: So I think there's what some people call a K-shaped economy.
00:17:17: Some things will get cheaper, some things will get more expensive.
00:17:21: But the things that are being offered by a social safety net are not the luxuries.
00:17:26: The things that are being offered by the social safety nets are the necessities.
00:17:30: So I actually think that the technology that we are creating will enable us to do this.
00:17:36: In fact, I think there are studies that show just three hundred billion dollars in one year would essentially eliminate all extreme poverty on this planet.
00:17:44: And that's one-fourth or one-fifth of the yearly military spending of just the US, right?
00:17:53: So it's not out of reach.
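The back-of-envelope arithmetic behind that claim, using the figures as stated in the episode (the speaker's estimates, not independently verified; the military-budget range is only inferred from "one-fourth, one-fifth"):

```python
poverty_cost = 300e9  # claimed one-year cost to eliminate extreme poverty

# annual US military spending implied by the "one-fourth, one-fifth" comparison
for us_military in (1.2e12, 1.5e12):
    ratio = poverty_cost / us_military
    print(f"${us_military / 1e12:.1f}T budget -> poverty cost is {ratio:.0%} of it")
```

With a $1.2T budget the ratio is 25 percent, and with $1.5T it is 20 percent, which is where the "one-fourth, one-fifth" framing comes from.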
00:17:58: We just need to have the will to make some of these kind of decisions.
00:18:01: And unfortunately, we make the decisions on the wrong side where actually everybody's increasing their military spending when really all of this is self-manufactured.
00:18:10: All the conflict that we have is being self-manufactured.
00:18:14: We agreed that, hey, we actually already have enough food.
00:18:17: We have thirty percent more food than we need in the world.
00:18:19: It just gets wasted because it doesn't get distributed properly or in time.
00:18:25: So there's the same with energy.
00:18:26: We have lots of energy in the world.
00:18:28: It's just not distributed in the right ways.
00:18:31: So if we realize that we actually have enough, we can stop fighting and we can start sharing.
00:18:37: We have nearly everything, but not in every place in the world.
00:18:41: It's not well distributed.
00:18:43: So number nine is, yeah, this reframes the fear debate.
00:18:49: Runaway AI and robot armies are not the biggest AI risks.
00:18:53: So what is that?
00:18:55: Yeah, unfortunately, there's too much fearmongering out there about this idea that
00:19:00: AI is going to kill us, because we watched too many sci-fi movies.
00:19:06: But the real danger is actually twofold.
00:19:09: I think one is misuse.
00:19:11: It's bad actors using this thing.
00:19:14: against populations.
00:19:17: It's much easier now to make viruses and chemical weapons because you have intelligence being spread around.
00:19:25: And in fact, a lot of the labs now, they would just take the orders to fabricate these proteins or these viruses, and they don't care.
00:19:36: They just get paid to do it.
00:19:37: So as a business, it's very easy to do this.
00:19:39: And it's something that we need to be spending more time protecting ourselves against.
00:19:44: There are also governments doing a lot of things that incite conflict.
00:19:49: We're creating conflict where there doesn't need to be.
00:19:52: And that can have unintended consequences.
00:19:55: So when you start to build a lot of weapons, then people need to find a way to use them.
00:20:00: Because then how do you replenish them?
00:20:02: You have to build more.
00:20:03: So you have to use it to build more.
00:20:04: And that's one of the reasons we had some of the issues in Europe the last few years.
00:20:10: I think the other part is the economic side.
00:20:15: If the economy breaks, if the jobs go away and we don't have a social safety net, if people stop buying, this can actually lead to the first part; it can lead to war.
00:20:27: Because when people lose their jobs, the fastest way to employ people is to start a war.
00:20:33: Because then you can just say, hey, now you're a soldier, and we're going to print more money, we're going to make more tanks, and the economy comes up. It's a path that politicians will see as a very natural thing, and people will excuse them for printing more money because it's war.
00:20:50: So these unintended consequences are what I'm worried about.
00:20:57: I mean, there are definitely some bad actors as well.
00:20:59: So that's the part that we should be thinking about.
00:21:01: That's the part we should be working together on: to identify bad actors, to identify economic risk and geopolitical risk, and to do more of the cooperation that we talked about, versus inciting hate and inciting conflict.
00:21:16: Working together.
00:21:17: Let's work together.
00:21:18: Yeah.
00:21:19: But who will do it?
00:21:21: Your final point feels philosophical, but urgent.
00:21:27: You say the biggest threat to our future is our mindset.
00:21:31: What exactly must change in how we think about it?
00:21:35: What do you think?
00:21:39: We touched on it a little bit earlier.
00:21:42: The idea is that everybody has been trained over the last thousand years to hoard power, hoard money, hoard resources, become successful, and then you can have greater control of your destiny, because we live in a scarce world.
00:22:00: And the reality is, as we just talked about, there actually is enough. We just need to accept it, and we just need to figure out how we share it with each other.
00:22:09: And until we have the mindset change to understand that we have enough, we will continue to do the things that will make the world dangerous, that will make the world high conflict, that will spend resources on things that we don't need to.
00:22:22: If we took all the three to four trillion dollars a year we spend on the military and put it into food, housing, energy, and medicine, many of the issues that we have could be resolved.
00:22:34: So in fact, there's a little story I want to mention that I think is telling. We know about the caterpillar that becomes the butterfly; it goes through a metamorphosis.
00:22:51: But what a lot of people don't think about is what is the program of the caterpillar?
00:22:54: The program of the caterpillar is what we've been doing: eat as much as you can, grow as fast as you can, and that's all.
00:23:01: It's its only job.
00:23:02: Find leaf, eat, grow.
00:23:04: The thing that needs to happen in the middle is that it needs to stop eating, it needs to go into its cocoon or its chrysalis, and it needs to forget that its job was to eat.
00:23:17: It needs to realize that, okay, now I am a butterfly, I need to be able to fly.
00:23:21: I need to be able to change my brain from a caterpillar brain to a butterfly brain.
00:23:29: And then two or three weeks later, it comes out, it becomes a different animal, and everything that made it successful in the past is forgotten.
00:23:36: so that it can now fly away and see other places and just create the next generation of butterflies and caterpillars, right?
00:23:47: And also help create the ecosystem with the planet and with the plants.
00:23:52: And the key is the willingness to forget, the willingness to stop for a little while, stop trying to grow and start to get to the next phase.
00:24:01: And we, as a society, are at that point where we can now say, hey, we've gotten big enough.
00:24:07: We need to stop growing.
00:24:09: We need to stop for a little bit and then restructure our institutions, so that we can get to the next phase and become a multi-planetary species, or realize the world of abundance that we already have.
00:24:24: Well, that's a real big step in the mindset.
00:24:28: We will see if we will succeed with that.
00:24:31: It is very difficult because it takes changing the mindset.
00:24:35: That's what I said.
00:24:36: So if we can change that mindset, it sounds very simple because it's something that we control.
00:24:43: But it is something that has been ingrained in us our whole life.
00:24:46: And if we don't change it, then this amazing technology we're creating, whether it's AI or it's robotics or it's biogenetics, we're going to misuse it.
00:24:56: And we're going to use it as a weapon.
00:24:57: And we will destroy the benefits that it brings us.
00:25:01: From mindset to the developments everybody in the AI world is talking about right now.
00:25:07: First is Clawdbot; you have already mentioned it.
00:25:11: Many still think in terms of chatbots, but Clawdbot, what is it?
00:25:16: Tell us.
00:25:16: Well, Clawdbot is essentially an open-source agent that you can download and put onto your computer,
00:25:22: and it becomes your virtual AI assistant and you can call any model you want with it, you can give it tasks, you can essentially talk to it as an assistant and it will start to do work for you.
00:25:34: You can even link it to a chat bot like a Slack or Telegram or whatever and just talk to it and it will go and find resources.
00:25:43: and be able to complete anything that can be done in front of a screen.
00:25:47: There's certain security risks or privacy risks because you're giving it access to your emails, to your hard drive, to your photos, to your calendars, to your credit cards.
00:25:57: But if you want to do that, and some people are doing it, they are finding this a very amazing and even addictive product. It's hard to believe that in such a short time we are now able to have this: not something that you pay a lot of money for, but something that's free, that you can download and just put onto your computer, and have that level of capability available to anyone with a laptop.
00:26:23: But you have to open everything up for that: your bank account, your email system, everything. You are naked in this world, and you give it all to a system you do not know, for free.
00:26:33: So this is the issue.
00:26:36: The good part is, you get to control how much you give it.
00:26:38: You can say, hey, whenever it comes to credit cards, I'm going to type it in.
00:26:43: But then when it needs to do things, they'll come and ask you.
00:26:46: So you choose how much level of information you want to expose.
00:26:51: But some people are saying, hey, I trust it.
00:26:55: I will let it do my stock trading, I will let it do my shopping and whatever.
00:27:00: So for the average person, I think it actually makes sense not to give it all of those permissions.
00:27:12: But the ability for it to do all of that, and to do it autonomously, is really the impressive thing,
00:27:15: and for it to be an open-source product versus something that's coming out of a giant lab.
00:27:19: So this goes back to what we said earlier: the world of AI is commoditizing, and you don't need a trillion dollars to make useful things.
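The permission model Alvin describes, where you decide which actions the agent may take on its own and it asks before anything sensitive, can be sketched as a simple human-in-the-loop gate (the action names and API below are hypothetical illustrations, not Clawdbot's actual interface):

```python
# Actions the user has marked as sensitive: the agent must ask first.
SENSITIVE_ACTIONS = {"use_credit_card", "send_email", "place_trade"}

def run_action(action, payload, ask=input):
    """Execute an agent action, pausing for explicit human approval
    whenever the action is in the sensitive set."""
    if action in SENSITIVE_ACTIONS:
        answer = ask(f"Agent requests {action!r} with {payload!r}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "blocked"
    return f"executed {action}"

# Example: a callback can stand in for the console prompt, here auto-denying.
result = run_action("use_credit_card", {"amount": 42}, ask=lambda _: "n")
```

The design choice is that the default is deny: anything the user has not explicitly approved, or has listed as sensitive, never runs autonomously.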
00:27:29: Okay, that's a good message.
00:27:31: And next, there's China's new foundation model, Kimi AI.
00:27:37: This is not just another model release.
00:27:40: Many experts say this is China's attempt to redefine its position in the global AI landscape.
00:27:48: What makes Kimi AI special and what are China's strategic ambitions with it?
00:27:53: Yeah, so I wouldn't label it as a China ambition, because it is one lab, one of a dozen labs in China that are very high-performing.
00:28:02: But this model is very impressive.
00:28:04: It's a trillion parameter model.
00:28:06: It's a mixture-of-experts model.
00:28:08: The thing that is special about it is that not only is it catching up to the closed-source frontier US models, in some areas and on some benchmarks it has actually exceeded them.
00:28:22: And then it comes up with this new concept of agent swarms.
00:28:26: You can have an AI that calls and spawns off a hundred other AIs to help it do the job.
00:28:33: So if I need to read a hundred reports, rather than doing it in sequence, which is what the current systems do, it can spawn a hundred agents.
00:28:40: They can read all hundred reports, come back and share the information, and you can do it four or five times faster
00:28:47: than you could before.
00:28:48: And so you could do that with different types of analysis.
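The fan-out/fan-in pattern behind such agent swarms can be sketched in a few lines; the `summarize` function here is a hypothetical stand-in for a call to one sub-agent or model endpoint:

```python
from concurrent.futures import ThreadPoolExecutor

def summarize(report: str) -> str:
    # hypothetical stand-in for one sub-agent reading one report
    return f"summary of {report}"

def swarm_summarize(reports, workers=100):
    """Fan out one sub-agent per report, then gather (fan in) the results,
    instead of reading the reports one after another in sequence."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(summarize, reports))

summaries = swarm_summarize([f"report-{i}" for i in range(100)])
```

Because each report is independent, the wall-clock time is bounded by the slowest single call rather than the sum of all calls, which is where the claimed four-to-five-times speedup comes from.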
00:28:50: And it's also a vision-language model, which means that you can give it a picture.
00:28:55: You can give it a picture of a website.
00:28:56: Say, I want to make a website like this, but I want to use it for my company, and they will make the code for it.
00:29:01: So it is essentially on par with the Anthropic Claude-type systems.
00:29:07: We talked last time about Claude Code and Claude Cowork.
00:29:12: It has some of these same capabilities.
00:29:15: But it's a free open source model.
00:29:19: There's a lot going on, a lot of challenge, a lot of change.
00:29:24: And do you have the impression that people like you and me are still in a bubble, and that we see a different world than the people on the street?
00:29:37: Well, clearly, this thing is moving very fast.
00:29:40: And clearly, most people are not paying as much attention to it as we are.
00:29:46: And in some ways, I don't know if they need to.
00:29:49: In the sense of all technology takes time to diffuse.
00:29:54: But I think for the people that are watching your podcast, our podcast, they're kind of more on the leading edge.
00:30:00: And if they want to know what's coming, then these are the discussions they need to start having.
00:30:05: But, you know, within two or three or six months, the topics that we're talking about will probably become common topics at the average dinner table, right?
00:30:17: Well, as always with you,
00:30:18: it's a lot of fun.
00:30:19: Thank you very much.
00:30:21: And to everybody listening, if this episode made you think, share it.
00:30:26: Follow Alvin, follow Big Bang Tech Report, and see you next time.
00:30:31: Stay curious and stay bold.
00:30:33: And now it's your turn, Alvin.
00:30:35: Yes, and realize that we have enough and stop fighting, stop competing, start sharing.
00:30:43: That's wonderful.
00:30:44: Thank you very much and see everybody soon.