Science Fiction: AI Agents will change our whole life!
Show notes
In the sixth episode of the BIG BANG TECH REPORT, Jens de Buhr and Alvin Wang Graylin explore the rise of AI agents as autonomous actors reshaping work, finance, and corporate strategy. They challenge the narrative of sentient AI invasion, emphasizing that current systems excel at executing human-directed tasks at unprecedented speed, but lack self-agency. The discussion examines how AI is already disrupting white-collar roles, commoditizing enterprise software, and creating security and regulatory pressures – all before reaching artificial general intelligence. At its core, the episode underscores a critical point: the transformative impact of AI depends less on intelligence itself and more on how institutions manage adoption, transition, and risk.
About the guests: Jens de Buhr – Founder & CEO, JDB Holding; publisher of DUP UNTERNEHMER; co-founder of the BIG BANG AI Festival. He connects business, politics, and research to shape Germany’s digital future.
Alvin Wang Graylin – Global tech strategist; author of “Our Next Reality”; Chairman of the Virtual World Society. 35+ years of experience across AI, semiconductors, XR, cybersecurity; ex-HTC/Intel/IBM/Trend Micro; founder/investor; Stanford HAI Digital Fellow; MIT lecturer; advisor on AI policy and governance.
Links: LinkedIn | Substack | X | https://ournextreality.com
Show transcript
00:00:00: So I am actually not worried about the intelligence levels of AI.
00:00:04: It's really how do you manage it?
00:00:06: And those types of capabilities are more about the framework and scaffolding around the AI than about the models underneath, so we will be getting there very soon!
00:00:16: The problem that I'm concerned with is what happens when that becomes available to everyone.
00:00:22: And how does that affect the workforce?
00:00:24: How does this affect the stability of our economy, and geopolitical tensions, when you have massive job displacement?
00:00:37: Welcome to The Big Bang Tech Report!
00:00:40: A podcast for anyone who doesn't just want to hear what technology can do but wants to understand what technology does.
00:00:51: As always, I'm speaking with my co-host Alvin Wang Graylin.
00:00:57: A prominent thinker and author on AI whose voice is heard around the world!
00:01:03: Hello Alvin!
00:01:05: Good to see you again!
00:01:09: Imagine millions of new actors are currently migrating into our country.
00:01:16: Invisible, no visas, no borders, at the speed of light.
00:01:23: A nightmare for Mr. Trump!
00:01:26: They speak our language perfectly, they write better than we do, they understand contracts, laws, arguments, they simulate emotions and make decisions.
00:01:42: That is exactly what's happening right now. Not with people, but with AI agents.
00:01:50: The central question: are we ready for an invasion of AI immigrants?
00:01:57: The origin of this idea lies with Mr.
00:02:02: Harari.
00:02:04: This image comes from the historian and author of Sapiens and Homo Deus, Yuval Harari. His thesis: humans did not conquer the world because they were stronger, but because language allowed them to cooperate with millions of strangers.
00:02:24: And now, for the first time something is emerging that masters language better than we do.
00:02:32: Language as a human superpower!
00:02:35: Alvin, if language was our true superpower, are we now losing that superpower to AI?
00:02:44: Yeah, and I understand why you're asking this, because remember two weeks ago we talked about Clawdbot, which is now called OpenClaw.
00:02:52: And that's been the hot topic around the AI sphere, because essentially millions of people around the world right
00:02:58: now are running their own personal agents that are starting to do work for them, not just a chatbot that talks back and forth.
00:03:07: Even these chatbots or agents now have their own social networks, where they communicate with each other.
00:03:15: And so there's this perception that these are sentient beings that self-operate, because the structure of the OpenClaw system allows them to run on a schedule, and they keep operating even without humans prompting them.
00:03:35: So we've anthropomorphized this technology into something that feels like a human or an immigrant, and that is creating this fear, and it feeds on what Harari has talked about.
00:03:47: I'm a big fan of Harari's books.
00:03:52: His first two are very spot-on, but I'm not sure I completely agree with the perspective that these AI systems are that today; maybe at some point in the future.
00:04:06: But they're not today.
00:04:08: They're still a technology.
00:04:14: They don't have self-agency and they don't have their own intent.
00:04:18: And so right now, really, they are still working on behalf of humans, based on tasks that those humans give them.
00:04:26: So they're still a tool. At some point they may become more sentient, as portrayed in his book Nexus, but they're not there today, and probably won't be for some time.
00:04:42: So I think the fear that we are seeing might be a little bit overplayed.
00:04:47: But right now, when you switch on the laptop, something has changed.
00:04:57: These are no longer just the tools we used to have; AI becomes an actor.
00:05:07: It does the work by itself and makes its own decisions, which many people do not understand.
00:05:15: Yes or no?
00:05:18: In a sense, right now it's still based on what we tell it. I've been running this for a while, essentially since last time, and it's still based on the requests of the users.
00:05:35: So a user will say: hey, go search the news around the world for me and find things I would care about, or do an analysis on this paper, and so on. And it will go out and do those things.
00:05:46: Now, it may be that it doesn't need you to interact with every single decision or obstacle it runs into, because it can solve its own problems.
00:05:57: But at the end of the day, the objective it's given is not something it gave itself.
00:06:01: It's something that you gave it, and then it tries its best to help you solve those issues.
00:06:08: And sometimes it'll come back and ask you: did you mean this?
00:06:11: Do you want something else?
00:06:13: Is this okay for you?
00:06:15: Which actually gives you the perception that there is a person, or a self-motivated intelligent being, behind the screen, which is the intent of these agent frameworks.
00:06:30: But the perception of an autonomous agent doesn't necessarily mean that there's actually a mind or entity behind it.
00:06:44: So I think we have to separate those ideas.
00:06:47: Years ago people said that in ten or twenty years we would pass the Turing test. The Turing test means that we do not know whether we are communicating with a machine or with a person.
00:07:04: I think the benchmark was a thirteen- or fourteen-year-old, and you would say the machine has the ability of a fourteen-year-old person.
00:07:13: Do you know when we passed the Turing test?
00:07:15: Oh no!
00:07:16: I think we passed it several years ago.
00:07:19: Yeah, nobody realized it.
00:07:21: And so the systems get smarter and smarter.
00:07:24: I think the Math Olympiad was won by AI.
00:07:29: So I think there is a development where these systems become smarter and smarter and surpass human intelligence.
00:07:40: If you look at the financial system, we have new instruments and new tools created by AI that nobody understands anymore.
00:07:51: And we had the same problem years ago with the financial crisis, where we had products that no one could understand anymore, created by people.
00:08:01: And what does that mean for the future of AI and financial systems?
00:08:06: Yeah, if you look at the stock market today, algorithmic trading actually makes up probably seventy to eighty percent of trades right now.
00:08:12: So retail and, you know, human trading is actually a minority of trading today.
00:08:17: Right?
00:08:18: So that's been the case for more than a decade.
00:08:21: Humans cannot make decisions and process data at the speed of computers, but I think we need to separate the idea of capability and intelligence from agency and consciousness, right?
00:08:37: And I think that's probably where I differ.
00:09:02: Using the image of an immigrant:
00:09:04: that's a human who comes to your country, has to survive and needs resources, with their own will, family and aspirations.
00:09:17: I don't think those latter descriptions match the technology we have; in terms of capability, absolutely.
00:09:26: Usually when you think of immigrants, they are less educated and mostly doing manual labor.
00:09:33: In this case, we actually have very well-educated AIs that are probably more learned than most humans or maybe all humans.
00:09:44: So there is a little bit of fear-mongering in what is portrayed in some of Harari's discussions and talks.
00:09:56: It brings attention, and I think he's done a very good job in terms of bringing attention to this topic.
00:10:02: But right now it is overly fear-driven, when we have the potential for amazing things if we properly manage them.
00:10:15: The focus should be on the people controlling these agents today,
00:10:19: not on the agents themselves, not on AI itself having its own ideas and aspirations.
00:10:28: Okay, do you already have your own Alvin agent?
00:10:34: I've been running Clawdbot for a while.
00:10:37: It's very helpful.
00:10:38: I have to say, it is an amazing thing!
00:10:41: It's also very insecure if you don't manage it well.
00:10:44: So I spent a lot of time locking it down and actually making sure that it does not go out and interact with other agents, because the social network for these agents right now is a security nightmare.
00:10:57: Because once you go online, these other agents try to steal passwords from your agent, talking to it and trying to trick it into releasing information.
00:11:10: So the key today to making sure these agents are useful is to keep them locked down, only interacting with you directly.
00:11:19: They may interact with the Internet to find information, or maybe services you give them access to, but they should not be interacting with other agents, because that creates a major security hole.
00:11:32: So what do you do with your agent, today or tomorrow?
00:11:37: Where does this agent support you?
00:11:41: It essentially helps me manage my day a little bit right now and helps me with research.
00:11:46: It helps me clear my calendar, it checks my email for the useful things I need to focus on; it goes out and becomes my assistant.
00:11:59: That's what it should be: an assistant that helps make my life more efficient, but it is not trying to replace me, and it is not something that has its own ideas of how I should be.
00:12:12: You know?
00:12:12: It is not controlling me, which is what some people are afraid of.
00:12:18: So you have the right perspective on how to use these tools.
00:12:22: Over the last months and years, especially in the stock market,
00:12:26: there was the Magnificent Seven: Microsoft, Amazon, Apple and so on.
00:12:32: They were really doing a very good job, but in the last two weeks they have been losing market value, a lot of market value, while a company like Walmart was catching up, because they have done a lot of work on AI in their systems and so are getting more profitable.
00:12:53: What is going on right now?
00:12:54: How do you see the future?
00:12:57: I think there's a bit of a correction. We talked about this a few weeks ago: there's a lot of capex spending from the major AI companies, but it isn't clear that the associated revenue is keeping pace with the growth of spend.
00:13:15: And so the market is starting to understand that, and in some ways there's a correction on AI-related stocks.
00:13:22: The other issue is software-related stocks, things that are SaaS-related, like Adobe and Oracle and various others.
00:13:34: All of these companies used to sell software services to enterprise.
00:13:40: What people are realizing
00:13:42: now is that with a prompt you can duplicate some of those capabilities.
00:13:48: Everybody can make their own software instead of buying software from a major vendor, which is why there was a lot of correction, or not correction but at least a reduction in value, over the last week or two.
00:14:01: And so I think that actually does make sense, because looking at the capabilities of these AI tools now, they can technically create a million lines of code, if you have enough AI budget, personalized and customized to you as an organization.
00:14:21: And that means I don't need to pay millions of dollars for a Salesforce or Oracle when I can ask it, say, to make something that looks like Salesforce but customized to my company, so we can run it on our own server.
00:14:46: And I think that's actually very doable today, especially if you give it an example of what you already have.
00:14:53: Yeah, so you see there will be a lot of turbulence on the stock market with AI,
00:15:00: when AI can disrupt business models.
00:15:05: Yeah, yeah.
00:15:05: And I think there will be significant disruption to those types of business models as well. Very soon we'll find that companies like the accounting firms, the PwCs and Accentures, the McKinseys and the Goldmans of the world, will start to see their margins significantly deteriorated by AI.
00:15:26: So a lot of people think that AI is going to create a lot of new value.
00:15:33: I think before it creates new value, it will actually destroy a lot of existing value, in the sense of the perceived value of existing vendors, because it commoditizes their services.
00:15:45: Yeah, so here we are in February.
00:15:48: Which question around AI occupies your mind most right now?
00:15:55: My biggest issue is actually: how do we transition?
00:16:00: A lot of people are focused on: how do we get to AGI?
00:16:02: I think we're very much on track to get to AGI soon, but not the AGI that people in Silicon Valley think of.
00:16:18: My thinking is about how we can get to an AGI defined as the ability of an average white-collar worker, which is what people in Silicon Valley talk about.
00:16:30: Their approach is to say: hey, I'm going to create a god model that knows everything, and you just have one model that does everything by itself.
00:16:39: The more I look at it, the more it's clear:
00:16:43: we don't need god models. Even if we take today's AI models and put scaffolding around them, creating these agent systems, they will soon be able to replace the majority of those workers without a massive increase in intelligence.
00:17:06: The intelligence capabilities of the systems we have today are already at the ninety-ninth percentile of almost every intelligence metric and benchmark, right?
00:17:17: And maybe higher.
00:17:18: Like you mentioned about the Math Olympiad: gold medals are being awarded to these AI models.
00:17:25: Very few humans, except people who train on this every day, can even get anywhere close to them.
00:17:32: So I am actually not worried about the intelligence levels of AI.
00:17:37: It's really: how do you create a long-term task that it can keep running on?
00:17:43: And those types of capabilities are really more about the framework and scaffolding around the AI than about the models underneath, so I think we will be getting there very soon.
00:17:55: Now, the problem that I am very concerned about is: what happens when that becomes available to everyone?
00:18:01: How does that affect the workforce?
00:18:03: How does this affect the stability of our economy?
00:18:06: How does that affect geopolitical tensions, when you have massive job displacement?
00:18:14: That transition is something I've been thinking about for the last couple of years, and I don't think our governments are prepared, nor the companies who are managing these systems or will soon be applying them.
00:18:30: When do you see this situation?
00:18:31: Which year: next year?
00:18:33: Ten years, five years?
00:18:34: Well, I mean, you can see that it's already very close with these Clawdbot or OpenClaw type systems.
00:18:45: Once that becomes something that is not an open-source security nightmare, once it's being managed by a few well-resourced companies who take away all the security issues, it'll be very tempting for everyone to start using them. And once everybody starts using them, they will find that they no longer need the majority of white-collar workers.
00:19:12: The question is: do I fire these people?
00:19:16: Do I give them new jobs, or transfer and retrain them? Unless there's some level of regulation to slow down decision-making in terms of layoffs.
00:19:29: In Europe it's better, because there's more protection for the workforce. In places like America,
00:19:34: there's no protection.
00:19:35: It is at-will employment for the entire country.
00:19:38: And, you know, there are a hundred seventy million workers in America, and seventy percent of them are white-collar workers, and they are all at risk when this technology gets to the level of what we're talking about.
00:19:50: You know, OpenClaw was made by one person, right?
00:19:53: That's why there are so many security issues: he didn't have time to patch everything and think about safety and security.
00:20:01: But if you have a Microsoft or Google start working on these solutions, they will create an enterprise-ready and security-focused type of solution.
00:20:16: When that happens, it'll be very capable but also safe at the same time.
00:20:24: That dramatically changes how white-collar work is done.
00:20:30: So you see that we will come back to the Wild West, more or less?
00:20:35: You told us that with your agent
00:20:38: you can't let it out, for safety reasons, because of passwords.
00:20:43: And then you tell us that white-collar people are fearing for their jobs, so it would be a very insecure world, right?
00:20:53: Well, this is what I'm saying: if we're using today's model, made by one person,
00:20:59: yes, it's very insecure.
00:21:01: But I don't see large enterprise companies releasing software until they've tested it well.
00:21:06: That's why I say it will take about a year, maybe two. But when they do release it, it will be ready, because they're used to working in that corporate environment, a regulated environment, and there is liability for the software vendors if there's damage related to a software solution they provide.
00:21:27: And this is why they actually haven't released it yet.
00:21:30: To be honest, the type of scaffolding around OpenClaw is not new.
00:21:35: It's something that has been around, but no company would be willing to put something like that out, because there is so much corporate and reputational liability: if somebody loses all their savings, or buys something they didn't want to, it creates security and privacy risks that no corporation wants to be liable for.
00:22:05: But an individual contributor, Peter... what's his name?
00:22:13: Steinberger.
00:22:15: He didn't care.
00:22:16: He just wanted to show what he had been working on for a few months, and it took off.
00:22:22: I don't think he expected that; it was an open-source project from a single developer.
00:22:30: So we need something positive for the end of our show, you know, because otherwise everybody will be depressed. We want to explain the future, but we need some positive things too.
00:22:43: What can we do with it?
00:22:44: I think this is a very positive thing.
00:22:46: Because what this shows us is that we are very close to having technology that can liberate us from the daily churn of work.
00:23:01: The key is: how are our institutions,
00:23:05: our governments and the corporations that are driving this,
00:23:10: how are they going to react to it?
00:23:11: And we have one or two years to do these preparations, to make sure that when it happens there's a smooth transition.
00:23:23: This is why transition, not invention, is the key, because I think we already have a pathway to getting to AGI.
00:23:32: Now, it is not the god model.
00:23:34: It's not necessarily a self-learning and self-improving model that will eventually become ASI, but it's absolutely capable of doing the work of an average worker, allowing us to be freed up to spend more time with our families, or exercising, or developing ourselves, without all the daily churn. And I think that's actually a very positive message.
00:24:02: The key is: if we prepare for it and have a soft landing, it will be an amazing future that this creates.
00:24:10: But if we don't prepare, if we're not protecting the humans who are part of our society,
00:24:19: then we'll have chaos and instability, geopolitical and economic instability, and that won't end well.
00:24:28: So this is essentially a message to our regulators and policy makers.
00:24:33: This is the time for you to get educated on this topic and start thinking about what we need to do, because we have a very short window of time.
00:24:43: So we get rid of nasty work and the future will be bright, but there are some challenges too.
00:24:52: And you will talk about these challenges
00:24:55: in our next episode. And I'm really happy to have you here, and that you will explain to us the world's future and what we can expect from it.
00:25:07: How do we work on it?
00:25:10: Yeah!
00:25:11: To see opportunities.
00:25:12: So please join us for our next episodes.
00:25:19: Thank you very much, Alvin Wang Graylin, for your interpretation of current worlds and future worlds.
00:25:29: Please give us some notes and write what you think about our episode; we will be back in fourteen days.
00:25:38: Yeah, absolutely, thanks, Jens, for moderating this conversation and giving a perspective that is very different from what you hear in the daily media.
00:25:50: And for the audience listening to us, please share it with your friends, because I think more people need to hear this and start thinking about it and taking action on it.
00:26:00: Thank you very much, bye-bye.