Crushing AI Costs! From $10B to $4M, how?

Show notes

From brute-force scaling to collaborative AI design – BIG BANG Tech Report #1 Are we leaving the era of “more GPUs, more parameters” and entering a phase where collaborative, efficient, design-driven AI sets the pace? In the first episode of BIG BANG Tech Report, Jens de Buhr and Alvin Wang Graylin discuss NVIDIA’s dominance, Google’s comeback as an architecture leader, China’s efficiency strategy with models like DeepSeek and Kimi K2, and the rise of open-source ecosystems that enable innovation in days instead of months. If you’re looking for clarity instead of hype, this conversation is for you.

Jens de Buhr is the founder, CEO and owner of JDB Holding GmbH. As the publisher behind DUP UNTERNEHMER, he leads one of Germany’s key media platforms for digital transformation in the Mittelstand. As a co-founder of the BIG BANG KI Festival, he is recognized as one of Germany’s prominent voices in the field of artificial intelligence, bringing together researchers, policymakers, and business leaders to shape the country’s digital future. Jens started his career in journalism at WirtschaftsWoche, Handelsblatt, and Capital, before moving into the role of Head of Marketing at SAT.1. In 1997, the East Frisian native launched his own company – today known as JDB Holding GmbH – and has been driving innovation at the intersection of business, technology, and media ever since.

LinkedIn: Jens de Buhr www.dup-magazin.de

 Alvin Wang Graylin is a global technology strategist, author of “Our Next Reality”, and Chairman of the Virtual World Society. With 35+ years in AI, semiconductors, XR, and cybersecurity—including leadership roles at HTC, Intel, IBM, and Trend Micro—he has founded four startups and invested in over 100 emerging-tech companies. A Digital Fellow at Stanford HAI and lecturer at MIT, he advises governments and Fortune 100 leaders on AI policy, global governance, and the post-AGI transition. His cross-cultural U.S.–China experience informs his work on international cooperation, safe AGI, and abundance-driven economic models.

Substack: @awgraylin LinkedIn: @agraylin X: @agraylin OurNextReality.com

Show transcript

00:00:03: Hello, everybody.

00:00:04: Welcome to the Big Bang Future Lab, the podcast where we explore the forces reshaping our world through AI, robotics, and exponential technologies.

00:00:14: My name is Jens de Buhr.

00:00:16: I'm the founder of DUP, and I'm the founder and the host of the Big Bang initiatives.

00:00:21: I'm a journalist, author, and a longtime observer of global innovation.

00:00:26: I've spent years interviewing leaders and studying how AI and digital transformation are changing business, society, and geopolitics.

00:00:36: Our mission is simple.

00:00:38: Reduce noise, increase clarity, understand what truly matters.

00:00:42: And I'm really, really happy to have a co-host here from the U.S. It's Alvin Wang Graylin, and it's really good to see you, Alvin.

00:00:51: Yes, my friend.

00:00:52: I'm so happy to be co-hosting with you on this.

00:00:54: We've been talking about it for a couple of months.

00:00:58: I have been in this AI, XR, cybersecurity, and semiconductor field for thirty-five years.

00:01:03: I've seen multiple waves of industry revolutions in this technology.

00:01:08: I'm currently doing research at Stanford and I'm teaching at MIT and I've written a book called Our Next Reality about essentially how this technology is going to unfold in the next ten years and what we need to do about it.

00:01:20: So I think that's kind of the theme of what we're going to talk about on the show: you know, what's happening and what do you need to do about it?

00:01:27: And it's really fantastic.

00:01:29: We first met at South by Southwest in Austin, and you are a real expert.

00:01:33: And you see everything from both sides, from the US, from the Chinese side, and I support you a little bit from the European side.

00:01:43: What's going on right now?

00:01:45: So we've got all three major continents covered.

00:01:47: Yeah,

00:01:48: we hope so.

00:01:49: And that brings us today to the central theme.

00:01:52: Are we moving from a brute force scaling race to a more collaborative design driven era of AI?

00:02:01: Let's dive in.

00:02:03: And for the last five years, AI progress has been defined by a simple idea.

00:02:12: More GPUs, more intelligence.

00:02:15: NVIDIA has been the engine of that growth, but the ecosystem seems to be shifting.

00:02:22: How stable is NVIDIA's advantage as the industry moves from brute-force scaling to smarter model design and more collaborative development?

00:02:33: Yeah, that's a wonderful question and it's an important question, because right now the entire industry is built on the assumption that the more GPUs you put in and the bigger the models, the smarter they get.

00:02:46: I started in the semiconductor industry in the early nineties, actually the late eighties.

00:02:51: I was building chips and helping Intel create instruction sets

00:02:55: that became what NVIDIA is using today in their CUDA instruction sets, something called MMX.

00:03:02: What I'm seeing now is that there's actually a shift.

00:03:06: The trillions of dollars that we are now putting or the world is now putting into data centers around the world assumes that you have to make the models bigger to be smarter.

00:03:14: But over the last couple of years, what we're seeing the trend is actually changing.

00:03:18: For probably about the last seven years, five of the last seven years, it was all about scaling.

00:03:24: The scaling laws were the big idea.

00:03:26: Right now, in the last one or two years, what we're finding is that actually there's a bit of a descaling law, what I call descaling laws: things like going from training-time compute to test-time compute.

00:03:39: It means that the longer you run it, the smarter it gets.

00:03:41: So you actually need less GPUs, but as long as you run it longer, it gives you linear growth.

00:03:47: And now with things like distillation, with new algorithmic techniques, with just new ways of architecting these systems, we're able to get trillion-parameter model performance on tens or maybe a hundred billion parameter models, which means that you can run it on a lot less hardware and get maybe just a one or two percent reduction in accuracy.
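To put rough numbers on the distillation point above, here is a back-of-envelope sketch. The parameter counts and precisions are illustrative assumptions, not the specs of any particular model:

```python
# Back-of-envelope sketch: weight memory needed to serve a model, illustrating
# why a distilled ~100B-parameter model fits on far less hardware than a
# trillion-parameter one. All numbers are illustrative, not real model specs.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory for the model weights alone (ignores KV cache, activations)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A hypothetical 1-trillion-parameter model served at 16-bit precision (2 bytes/param)...
full = weight_memory_gb(1000, 2.0)
# ...versus a distilled 100B-parameter model quantized to 4 bits (0.5 bytes/param).
small = weight_memory_gb(100, 0.5)

print(f"{full:.0f} GB vs {small:.0f} GB of weights ({full / small:.0f}x less)")
# prints: 2000 GB vs 50 GB of weights (40x less)
```

Under these assumed numbers, the smaller distilled model needs roughly a fortieth of the weight memory, which is the kind of gap that separates racks of servers from consumer hardware.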

00:04:11: It's well worth the cost.

00:04:13: In fact, right now what we're finding is that all the models are able to get very similar performance no matter where they're trained.

00:04:23: That's really putting a wrench into the assumptions that people have had.

00:04:29: Right now, for us in Europe and all of the world, NVIDIA is a money machine and something everybody relies on, saying that's the future.

00:04:40: So you doubt that this will be the future for the AI field, that NVIDIA will not be so dominant anymore?

00:04:51: Well, I think right now NVIDIA has probably ninety percent of the training compute in the world, and that's a very dominant market share.

00:05:00: And it's also because in the past, really, there was no real competition, at least on the training side, on the cloud side.

00:05:08: What we're finding now is that you have AMD coming up with their new solutions, and you have Google coming up with their TPUs.

00:05:17: You have the Chinese coming with their Huawei chips.

00:05:20: So there's a lot more other chips that are now able to provide similar, if not sometimes better performance when you take them together.

00:05:28: So from a training perspective, I think it's very difficult for any company to maintain the dominant market share that NVIDIA has had.

00:05:38: If you don't have that dominant market share, you also won't be able to maintain the margins that they have had.

00:05:44: Their net margins are crazy.

00:05:47: They're doing very well these days.

00:05:48: They're really the only company in the AI industry today that is making money consistently.

00:05:54: In some ways, that sounds great for them, but it's a very precarious place to be when the rest of your customers are not really making money yet.

00:06:03: There is a player in China.

00:06:07: Do you see that China will have a more dominant role in the future?

00:06:15: because they are very smart, they are doing a very good job, and we had this DeepSeek moment months ago.

00:06:22: Do you see in the future maybe that there are Chinese solutions which can be very difficult for NVIDIA?

00:06:29: Yeah, I mean, I think it's not just in the future.

00:06:33: You know, since DeepSeek earlier this year, it's really been proven that a small team with a small number of processors can create a very capable model.

00:06:43: And over this entire year, the gap between open-source models and closed-source models has continued to close.

00:06:50: In fact, a week ago, Kimi K2 Thinking was released, and that model was essentially just as powerful as the leading closed frontier models.

00:07:01: And it was made by a lab with a couple hundred people and maybe ten thousand chips, compared to the hundreds of thousands or millions of chips that the closed labs are using.

00:07:12: And what that really proves is you don't need giant data centers to train capable models.

00:07:18: And what's even more important is that the Chinese models now are... optimized to work on smaller systems because they just don't have them in their home market.

00:07:31: So you can run that Kimi K2 model on two Mac minis.

00:07:35: Two Mac minis, you can run it at home and get the performance of something that would take racks of servers for the leading Western models.

00:07:44: I think that changes the equation in terms of what people have expected for a very long time.

00:07:51: There is a big problem of trust; not a lot of people trust Chinese technology.

00:08:00: And I think this DeepSeek moment lasted only a few weeks, a few months, a moment.

00:08:05: Nobody cares about it.

00:08:07: Yeah, so I think there's the issue of trust in terms of... running the models that are operated on Chinese servers in China.

00:08:16: But the issue is right now, if you look at what's happening, all the models that are coming from China are actually open source.

00:08:21: And what that means is that they show you how they work, they give you the weights, they give you the papers.

00:08:26: So they're essentially sharing the learning and innovation that's happening, and you can tune it to whatever way you want.

00:08:31: In fact, what's happening right now is, almost, I would say, eighty to ninety percent of startups in the world today are using Chinese open-source models

00:08:40: for their operations,

00:08:43: because they can't afford the big closed-source models.

00:08:45: It could be, you know, fifty or a hundred times the price to use closed source versus the open source models.

00:08:52: And because it's transparent, because it's open source, you can modify any way you want.

00:08:58: So you really don't have to worry about trust.

00:09:00: You don't have to worry about sending data to anywhere else.

00:09:02: You could run it on your own on-premise or on your own private cloud.

00:09:06: So I think that changes what a lot of people assume about Chinese models.

00:09:12: The problem about the future is that nobody knows what really happens.

00:09:17: And months ago, everybody told me, well, just a month ago, that Google would be the loser of everything.

00:09:25: Nobody will use Google's search engine anymore, and you have to sell all your stocks.

00:09:32: And this is the loser.

00:09:34: But it's the opposite.

00:09:36: Because right now the stock is doing well, Gemini is fantastic, and then there's Warren Buffett. Warren Buffett is an investor in Coca-Cola, and he's a very, very conservative investor.

00:09:49: He announced a few days ago that he will buy or has already bought lots of stocks of Google.

00:09:56: What's going on there?

00:09:57: Yeah, so I mean, Google is actually, they were the main... juggernaut.

00:10:03: They were kind of the OpenAI of this industry probably five, six years ago.

00:10:09: And they were the ones that came out with the transformer paper, which all of the current language models are based on.

00:10:15: But they didn't really take it out to market, I think for two major reasons.

00:10:19: One is that they're probably afraid of cannibalizing their search market by introducing a use case that might reduce the search traffic.

00:10:30: But the other thing is I think they all also have a little bit more sense of, you know, should we be using this for, you know, are there dangers?

00:10:38: Is there safety issues?

00:10:39: And so, you know, I actually have a lot of respect for Demis Hassabis, who's the lead of the AI efforts at Google.

00:10:46: And I think they've done amazing things and, you know, they created AlphaFold and, you know, AlphaGenome and all these other things that are really more science focused.

00:10:55: science-focused AI models that are not about creating AGI, but really taking AI to create solutions for real-world problems.

00:11:04: So in some ways, I feel like they've been moving slowly.

00:11:08: But they have thousands and thousands of researchers in this space.

00:11:12: So they have a very deep talent.

00:11:13: And they also have their own compute.

00:11:15: So they're not a slave to any other third party vendor in terms of hardware.

00:11:20: And that gives an advantage because they don't have to compete to see if they can get the allocations from NVIDIA for training.

00:11:29: So I think the combination of their strong talent pool, their long-term pedigree in the space and their own hardware, as well as they have huge amounts of data.

00:11:38: They have so much search data around the world.

00:11:40: They have the YouTube data to get videos.

00:11:44: So now what they're doing is, with the new Gemini 3 that just came out, it's actually right now the leading model in the world, bar none.

00:11:52: Clearly, they have outperformed every model with this.

00:11:56: And the assumption right now in the industry is actually they will continue to probably take the lead from now going forward, because now they are actually serious about putting out solutions to the world, whereas before, they were more holding back with some internal conflict.

00:12:14: And for you personally, what do you think?

00:12:16: Who is the winner in this race right now?

00:12:18: We have in the race, we have Meta.

00:12:23: We have Google, we have Microsoft, OpenAI, somebody else.

00:12:28: So I think this is the issue today is that there's a bit of a false narrative of a race condition because people seem to feel that there's a finish line.

00:12:41: And the finish lines, most people think, at least in the Silicon Valley, is they want to get to AGI.

00:12:46: And they think, if I create AGI, I create a trillion dollar machine for making money.

00:12:55: So AGI is artificial general intelligence.

00:12:57: That's essentially the idea that now I can create AI that is as smart as any human, and it can essentially take over all the cognitive labor, and it keeps learning, so it becomes smarter over time.

00:13:09: Right.

00:13:10: That's the idea.

00:13:11: And, you know, and it's now being politicized to become a geopolitical discussion between, you know, is it a race between US and China?

00:13:20: You know, if we slow down, will China win?

00:13:23: And there's a lot of kind of regulatory capture that's happening today where these major labs in the Silicon Valley are going to Washington DC and telling them, don't regulate us, because if you regulate us, you'll slow us down, and China will win, and then we'll lose to China.

00:13:39: And I feel like that's a bit of a false narrative that is actually dangerous for the world, because this technology is very powerful and can be used for good, and we can see that, and it could also be used for bad.

00:13:54: And if we completely take away all the safety guardrails, there's a higher chance it will be used for bad, and we're not putting in the right safeguards to keep it safe.

00:14:03: And we're not putting in the right policies to help people use it in a safer and more responsible way.

00:14:12: So we need to really reframe this

00:14:15: race from a winner-takes-all scenario, which is what everybody thinks right now.

00:14:19: Whoever gets their first wins everything.

00:14:21: And what we're seeing right now, with all the launches every week, there's a switch in position of who's in the lead.

00:14:31: What that means is, first of all, there is no finish line.

00:14:34: Two is: is it really worth it to spend the billions of dollars

00:14:39: that Google and OpenAI and Anthropic and all of them are spending on essentially a quarterly basis to try to build these models, right?

00:14:49: And it's unclear because if you build a model within a few weeks, it gets overtaken.

00:14:55: How much is that temporary lead really worth?

00:14:58: And it's very unclear.

00:15:00: So it could be that you spend a lot of money and in three, four months, another will come along and will be a

00:15:08: little... It's actually three, four weeks.

00:15:10: It's not three, four months.

00:15:12: That's the thing.

00:15:14: And some of these models cost tens of billions of dollars right now to train, tens of billions for one training run.

00:15:20: Now, if you look at what's happening in China, the Kimi K2 model that just launched a week or two ago was trained for four point six million dollars.

00:15:26: Right.

00:15:27: So four point six million dollars.

00:15:29: And then, you know, Google and Grok and OpenAI are spending billions per model.

00:15:34: So this is essentially a hundred-to-one ratio in terms of costs.
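Taking the figures quoted in the conversation at face value (they are conversational numbers, not audited training costs, and the frontier-run figure here is an assumed order of magnitude), the arithmetic works out roughly like this:

```python
# Illustrative arithmetic for the cost gap discussed above. The dollar figures
# are the ones quoted in the conversation plus an assumed frontier-run cost,
# not verified numbers.

kimi_k2_cost = 4.6e6       # reported Kimi K2 training cost, ~$4.6 million
frontier_cost = 1.0e9      # assumed frontier-lab run on the order of $1 billion

ratio = frontier_cost / kimi_k2_cost
print(f"roughly {ratio:.0f} to 1")  # prints: roughly 217 to 1
```

A one-billion-dollar run would put the gap at a couple hundred to one; the hundred-to-one figure mentioned above corresponds to a frontier run of roughly $460 million.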

00:15:39: And I'm not sure if that investment is the right place for everybody to be putting their money, because essentially the entire tech industry is being built on this.

00:15:51: Right.

00:15:51: In fact, if you look at the American GDP this year, it was about two percent growth so far, and ninety percent of it was based on building out data centers, right?

00:16:02: It's a very lopsided, over-weighted on this particular one type of model of growing intelligence.

00:16:11: But for me as an entrepreneur, if I use it, it's not only for writing some papers and some speeches.

00:16:22: Sorry, it's a question of agentic AI too.

00:16:27: How do you see it right now?

00:16:29: Well, I think that the next phase is agentic right now.

00:16:32: You know, the last couple of years has been more the chat model, right?

00:16:34: Essentially, you ask the question, you get an answer back, you put in a prompt, you get a picture back, you know, that type of a model.

00:16:40: Essentially, what agentic AI does is that now you can give it a longer term task.

00:16:44: You may say, hey, do a program that does this or make me a video that does that or help me do stock trading that maximizes my returns.

00:16:57: In fact, there was a contest the last two or three weeks called Alpha Arena.

00:17:02: I don't know if you saw that.

00:17:03: But essentially, they pitted the top six models in the world.

00:17:07: And they said, here's ten thousand dollars of cryptocurrency.

00:17:11: You can trade any cryptocurrency you want.

00:17:14: We'll look at the returns and we'll compare it to just how well is Bitcoin doing as the baseline.

00:17:20: And essentially, after a couple of weeks, right now, I think Qwen from Alibaba and DeepSeek are the two leaders.

00:17:26: They both made maybe something like ten to thirty percent, something in that range.

00:17:30: And OpenAI, Anthropic, Google, they've all lost money.

00:17:34: I think OpenAI might have lost the most.

00:17:36: They've lost probably fifty, sixty percent over a two- to three-week period.

00:17:40: So what that's showing is that at least on a real-world task of trying to make money using agents where you're not putting any humans in the loop.

00:17:50: You're just saying, you trade based on what data you're getting.

00:17:53: The very low-cost, free or cheap models that are coming out of China are actually outperforming the US models that cost billions of dollars to train.

00:18:02: So I think that's a very interesting development.

00:18:07: We talk a lot about the US.

00:18:09: We talk a lot about China.

00:18:10: Do you have any ideas?

00:18:12: Do you see somewhere in Europe?

00:18:15: Well, I think Europe actually is not in as bad a position as a lot of people think.

00:18:22: I was in Europe recently, and a lot of times when I talk to European leaders and corporate leaders, they always are saying, oh, we're still behind.

00:18:30: We over-regulate ourselves, we can't innovate, and there's no investment here.

00:18:35: You know, I think we need to separate the idea of invention versus diffusion, because that narrative is not real, and history shows this.

00:18:44: It's not the company or the country that invents the technology first that succeeds.

00:18:48: It's the country or the company that diffuses that technology to a large number of users as quickly as possible without hurting their own market, right?

00:18:58: If you look at, you know, I spent fifteen of the last twenty years working and living in China, and I've seen the growth of what's happening over there.

00:19:05: And most of the things that are making it successful were not invented in China, but they brought them in, they diffused them, they put them into all parts of industry, and now they've grown their market tenfold-plus over the last ten, fifteen years in terms of

00:19:21: economic growth and the quality of life.

00:19:23: And now they are the manufacturing capital of the world.

00:19:26: They didn't invent the robots for manufacturing.

00:19:29: They didn't invent the technology that's used for solar cells or for thorium reactors.

00:19:38: But now they're leading the world in these places.

00:19:42: The rapid high-speed trains made in China, which they dominate today, were actually invented in Germany first.

00:19:50: And they took some of those designs and brought it to China and then optimized it for the local market.

00:19:56: So I think the key is how can Europe take that same idea?

00:20:02: How can they take these AI models that are now open source, anybody can have access to, deploy it to the industry, to the economy without cratering the system?

00:20:13: In fact, I think Europe has a good chance to do it because you have such a good social safety net.

00:20:20: And when you deploy this technology, what's going to happen in places like America is that they will say, oh, I'm getting a fifty percent increase in productivity.

00:20:29: I'm going to fire half my staff or maybe thirty or forty percent of my staff because now I can.

00:20:36: And when you do that, it creates an economic shock to the economy, to the labor force, to social stability.

00:20:45: Whereas in Europe, it's harder to fire people.

00:20:48: So they will say, okay, let me deploy them to somewhere else.

00:20:50: Let me retrain them to do something else.

00:20:52: slower pace of impact.

00:20:54: Maybe I'll give them a four-day work week instead of a three-day work week.

00:20:58: I think those are the kind of things that European leaders will start to think about, whereas American leaders will say, hey, I just had this big increase.

00:21:06: Let me fire a hundred thousand people, five hundred thousand people.

00:21:08: In fact, if you look at Amazon, they announced in the last few weeks that they're firing tens of thousands of white-collar workers.

00:21:15: And then over the next two, three years, they're going to fire six hundred thousand warehouse workers because they're going to bring in robots to do warehouse work.

00:21:22: And then probably after that, they're going to fire a lot of the delivery workers because now they're getting autonomous drivers and a robot to deliver.

00:21:29: So those types of decisions will be much harder in Europe to make.

00:21:34: And I think in some ways it's a better thing for society, because our economy needs time.

00:21:39: It needs time to adjust to these social shocks and economic shocks that AI will bring.

00:21:44: A few days ago, I was at an event with the Vice Chancellor of Germany, Lars Klingbeil, and they spoke about the future, one hour about the future.

00:21:54: Can you imagine that the word AI was not mentioned one time when they spoke about the future and the audience didn't ask anything about it?

00:22:06: because for them, it was, I think, far away. We are living in more or less two bubbles right now.

00:22:12: Do you believe in that?

00:22:13: Yeah, I think there's a certain sense of safety from hiding from what's coming.

00:22:20: Because if you tell people there have always been more jobs created and this is nothing to be scared of.

00:22:26: It makes you feel good today.

00:22:28: But it also does something very dangerous: you stop preparing for it.

00:22:32: You don't make the changes in policy.

00:22:35: You don't make the economic adjustments.

00:22:39: You don't prepare your staff for this change.

00:22:42: And when that happens, when it does come, it's going to be a bigger shock.

00:22:46: So it's a little bit disappointing to hear that somebody in that position, as you're describing, isn't helping alert the country and the region to what is coming, because it's very clear that it is coming.

00:23:02: In fact, in the last two months I've probably interviewed

00:23:07: fifty-plus CEOs of companies who are implementing AI solutions.

00:23:11: This is part of my research I'm doing at Stanford right now.

00:23:15: And I'm asking them, what are you doing with it?

00:23:17: How are you applying it?

00:23:19: What's making it work?

00:23:19: What's not?

00:23:20: And I can see across the board, the companies that are doing it well are getting multi-X, a two hundred, five hundred percent increase in productivity or efficiency or cost reductions.

00:23:32: And their plan is to, at some point in the near future, reduce their workforce, or reduce hiring, or let attrition happen without replacement.

00:23:42: They're going to focus on reducing costs because they can.

00:23:46: And to hide from that reality is actually, I think, irresponsible for people who are in positions of policy today.

00:23:57: And you know, I think

00:23:58: it's very important for people like us to talk about it and to bring this information about the change, the chances and the risks, to the people, and so I'm really happy that you are here on the show.

00:24:14: That was a powerful and a wonderful discussion.

00:24:18: And I think, yeah, every two weeks or every week, we will have a Big Bang Future Lab.

00:24:22: It depends on the news situation, what we can talk about.

00:24:26: And we break down the tectonic shifts shaping the future of AI and robotics with honesty, clarity, and a global perspective.

00:24:36: And so please join us, please come with us, send us some emails and you know the future is accelerating.

00:24:43: Join us on our show. And Alvin,

00:24:47: What are your last remarks? I need something positive.

00:24:52: Yes,

00:24:53: I've actually enjoyed this show and I think we're going to really help people understand this complex issue.

00:24:59: And please like and subscribe and share this podcast.

00:25:03: And if you have any questions you want us to talk about, send them our way, because I think we are going to provide an honest and transparent analysis of these issues.

00:25:12: So please let us know what you want us to talk about.

00:25:14: We'll try to cover as much as possible.

00:25:16: Thank you.
