PU 15: State Secret or Vibe Physics?

A recent patent application that made the news in Germany looks suspiciously like someone with chatbot delusion tried to vibe code a threat to NATO's nuclear deterrence strategy.

Uber founder Travis Kalanick, inventing the term “vibe physics” (All-In Podcast)





Podcast Transcript

This is Punching Upwards, episode 15, for the 21st of December, 2025: State Secret or Vibe Physics? Broadcasting from Düsseldorf, Germany and working throughout the Christmas holidays, as usual, my name is Fab and I’m your host. Welcome to Punching Upwards.

In this episode, I’m looking at an interesting case of a recent patent application that the public media here in Germany reported on. I will give my theory of what actually happened and what AI has got to do with it. On Monday, German public broadcaster ARD published this somewhat amazing story about some random German dude who applied for a patent for an invention and then had this invention declared a state secret. But later it was declassified and the patent office even said they’re not sure it’s an invention at all. What exactly happened there?

Baran D. from East Westphalia has invented something. What exactly, he won't say. And there's a reason for that: his invention was, at least temporarily, classified as a state secret by the Federal Ministry of Defence. "In foreign hands, this patent would allow the strengthening of enemy combat capability, under certain circumstances even jeopardize NATO's nuclear deterrence, which would result in a serious disadvantage to the security of the Federal Republic of Germany." So reads a letter that inventor D. received from the Deutsches Patent- und Markenamt in July.

So that is part of the ARD report. This is the story according to the journalists from Tagesschau, the main primetime news program from ARD. A man from East Westphalia, identified only as Baran D., spent many hours in his free time reading scientific papers on his home computer. Suddenly, he had an idea. He developed it into a paper of his own and showed it to a friend, a scientist with knowledge in physics and engineering. This friend apparently thought it was a valid idea. After that, he handed in his findings to the Deutsches Patent- und Markenamt, the DPMA, or the German Patent Office, in March. Mr. D, as I'm going to call him from now on, isn't talking about what his invention was. But the Tagesschau report insinuates that it has something to do with weapons development, possibly nuclear secrets.

Several months after filing for a patent, the patent office got back to him. More specifically, he got a letter from a special department called Bureau 99, which deals with patents that could impact national security. Bureau 99 told Mr. D that his application did qualify as a new invention and was therefore eligible to be patented, but that it had been classified as a state secret. This means you can't talk about what you invented, build it, sell it, or apply for patents in other countries. Basically, you're screwed. The state takes your idea now. Bye bye.

In the justification, the office, this Department 99, said that in foreign hands, the patent would allow the strengthening of enemy combat capabilities, enabling them to overcome German defenses and under certain circumstances even jeopardize NATO’s nuclear deterrent, which would result in a serious disadvantage to the external security of the Federal Republic of Germany. The invention must therefore be kept secret from foreign powers.

But then, a plot twist. After the ARD journalist contacted the patent office about the case, Mr. D got another letter with some questions. Did he use AI tools in his patent application? And did he upload some of his documents to a cloud? Mr. D answered that he did indeed use AI tools in preparing the documents for his patent application. The patent office then replied that because of the use of AI, confidentiality of the invention could not be guaranteed anymore. So now the classification as a state secret would be revoked because of that. But they also said that they can’t verify that the invention did actually work. So far, said the patent office, his ideas are only based on claims without any scientific proof.

Mr. D is somewhat angry about all of this and withdrew his patent application in Germany. He now wants to apply for patents in the US or in Canada. Now, in the legacy media reports, all of this is painted as a huge mystery. But these journalists all make a fundamental mistake in their analysis, I feel. They all assume that the invention is legit in the first place, just because the patent office said so initially. And even the ARD people didn't get the inventor to talk about what exactly he invented; that's quite obvious. So how can you have useful reporting on something like this when you don't even know what the underlying invention was?

As is now so often the case, some lay people on the internet came up with a much more plausible theory than what was offered in the reporting by ARD and other big media outlets. The theory goes as follows: it is very unlikely that some random dude comes up with a breakthrough invention in a highly specialized field like weapon systems design just by reading some papers at home on his computer. Patents that get declared state secrets are very rare in Germany, and virtually all valid ones come from big defense contractors, the military itself, or niche startups staffed by people who used to work for the former two.

So what I think is more likely here, as several people on the Internet have pointed out, is that this dude used AI to vibe code the patent. And the patent office was too stupid to notice. Then this Bureau 99 got hold of it, didn't notice either, and classified it. Maybe someone somewhere along the line noticed that this was based on AI bullshit, but because the thing was now a state secret and even the inventor wasn't allowed to talk about it anymore, they thought the embarrassing slip-up would just stay buried. But then the press got wind of it and asked about it, and they realized they needed to do something. So they made sure to verify that the guy had used AI, and when he admitted it, they came up with the excuse that in that case it wouldn't be a secret anyway. The side remark in the last letter, that they don't even know if the invention would work, kind of points to this.

They are of course technically correct, and it could all have happened at face value, like the ARD report makes it sound. But I think the Internet is right: the patent office got caught flat-footed by some AI patent slop, and now they're trying to cover for the fact that they didn't initially turn it down. And they are technically correct, because the moment you use AI tools to help with something like this, that data ends up on the AI company's servers. Let's say this guy used ChatGPT; then the data would be on OpenAI's servers. So you can't make it a state secret, because it's already known to people outside the state. And as is well known, these AI companies give intelligence services in the US access to information like this. So you would have to assume that they already know it. That much is technically correct.

But I personally think they also use this to cover up what really happened. I think what happened here is a further symptom of the kind of AI-induced delusion that has swept our society, from tech billionaires going on Joe Rogan to explain how AI will change absolutely everything, to normal everyday people going absolutely delulu because they talk to chatbots too much. And to make this point, here are some excerpts from a great YouTube video by Angela Collier, a theoretical physicist, who takes Uber founder Travis Kalanick to task for coining the term vibe physics. The video is from July, but I think it's very relevant to this story. First, she recaps what Kalanick said and then explains what vibe coding is.

So there’s this podcast called All-In where a bunch of venture capitalists share their thoughts and feelings. And on July 11th 2025, so I’m a little late to this video, Travis Kalanick was a guest and this guy is a billionaire because he invented the taxi cab except in his version the workers don’t have rights or health insurance or anything. On this particular …

That's Uber we're talking about, of course … I kinda like that take.

On this particular episode, Travis said this: Something else I’m looking into, and I’ll go down this thread with GPT or Grok, and I’ll start to get to the edge of what’s known in quantum physics, and then I’m doing the equivalent of vibe coding, except it’s vibe physics.

So he says … I will get to the end of what’s known of quantum physics. And doing that, he has gotten close to physics breakthroughs. Wow. Oh, my God. Wait. What was that?

Vibe coding, except it’s vibe physics.

So vibe coding is a new term. I think it's from February of 2025, but it's been hugely popular. Let me read you the quote from the guy who coined this term, Andrej Karpathy. Here's what he says vibe coding is: fully giving in to the vibes, embracing exponentials, and forgetting the code even exists. Which, I mean, sounds like a bad way to write code, actually. What you do is, through text, like English language, not computer code, ask a chat box to write some code to do something. And then you test it just by running it to see if it does what you want it to do. And then you kind of iterate again through discussion, English words and not code language.

You just do this until it gives you what you want. This is vibe coding. And an important component of vibe coding is that you do not know, or want to know, or even attempt to understand what the code does. Like, you asked for code that does a thing; the only thing you're testing is whether it does the thing. You don't go back through the code, read it line by line, and try to understand how it works. That's not vibes. Vibes is just having the chat box do it until it does what you want.
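As an aside from me: the loop she's describing is simple enough to sketch out. Here's a minimal, purely illustrative Python version; `ask_llm` and `looks_right` are made-up stand-ins for the chatbot call and the "does it do the thing" check, not any real API:

```python
# A minimal sketch of the vibe-coding loop as described above.
# ask_llm and looks_right are hypothetical stand-ins, not a real API.

def vibe_code(goal, ask_llm, looks_right, max_rounds=10):
    """Iterate in plain English until the output 'does the thing'.
    Crucially, the generated code is never read or understood."""
    prompt = goal
    code = ""
    for _ in range(max_rounds):
        code = ask_llm(prompt)       # black box in...
        if looks_right(code):        # ...only the behavior is checked
            return code
        prompt = goal + " -- that wasn't it, try again"
    return code
```

The whole point, per her definition, is that nothing in this loop ever inspects `code`; it only checks whether the result appears to work.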

So I’ve said that the AI tools are all well and good as long as you are an expert in the field you are using it for and you have the ability and knowledge to verify what is coming out. And vibe coding is like the opposite of that. You requested the LLM to make a thing and like a fool, you’ve just taken that output and been like, it does the thing. Without opening it at all, without trying to understand how it’s doing the thing, you’ve just made a little black box that you don’t know what’s inside that box, but it does a thing. You vibe-coded it.

And you could think of how bad this is from an energy consumption standpoint. You’re using all of this energy to do a thing that you could just try to figure out yourself. You could think about how bad this is from an algorithm standpoint. Think bias. These AI tools are fed data from the world, which of course has bias in it.

But I want to talk about security just for a second. Imagine you asked a LLM to make a hot or not app for iOS and you iterate on it for whatever because you’re a vibe coder and you’re just super good at prompting. Oh my God. And it makes an app and you just upload it to the app store and people buy it, right? You don’t know what’s in the app. You don’t know what it’s accessing. You don’t know what its permissions are. You don’t know what it’s taking from each user’s phone, but you are responsible for that information. You are responsible for knowing how your code works and vibe coding means that you don’t, which is bad.

I mean, she raises some good points here, but there's one she doesn't address. All of this could be construed as accidents. But if you're actually talking about software security, the biggest problem would of course be backdoors that the LLM builds into your code. For what reason? It could be an accident; it could just pick up a backdoor from the corpus, from the information it learns from. But more likely, somebody is attacking the training data so that backdoors get built into the code the LLM produces for people. Or even the company that runs the model building backdoors into your code, maybe at the behest of some intelligence service. So that would be the most obvious, the biggest danger, and she doesn't even address it.

So no, of course it's not a good idea to do vibe coding. If you do software development, it behooves you to understand what your code does. That is your job. It's like why it's a bad idea to use an LLM to do journalism: that wouldn't be ethical, because as a journalist it's your responsibility to research a topic. You can't just have some little black box research a topic for you, then believe what it says and publish an article. I mean, you can do that, but that is very bad. You are responsible for knowing how your code works, and vibe coding means that you don't, which is bad.

I don’t think I would want to buy an app from someone who doesn’t understand how the app was coded. There’s also the …

Never mind buying it, you wouldn't even want to run that app in the first place.

There's also the issue of vibe coding from the training perspective. Like, you are specifically and intentionally not learning anything, right? You are prompting, rearranging questions until you get an output you desire, but you have no idea how that was done. You don't know how the code works. If you are some aspiring software developer, you're spending all of this time not training for the field you've chosen. You are not learning anything. You are not improving your skills. You're basically doing what anyone could do with an LLM, which seems bad for, like, your career prospects and how you plan to go through the world. Like, you are specifically and intentionally refusing to learn anything, which just seems like a problem.

There have been a bunch of studies published recently, because this whole AI thing is very young compared to software development as a whole. And a lot of this research is saying that people who use AI tools end up relying on them. They're not saving any time, they're less productive, their co-workers don't trust their work as much. So by vibe coding, you are specifically preventing yourself from learning, preventing yourself from improving, you are dampening your skill set, you are lowering your future income prospects, and it seems like a dumb thing to do. So that's what vibe coding is.

And this is why vibe coding is a bad idea. But now Angela is going to talk about vibe physics, the term the Uber founder coined on that podcast. And that is an exponentially worse idea. As a physicist, she's going to explain why it's such a bad idea, and why it's completely dumb and embarrassing to even talk about it. But we'll later get to why this is still important. So, yeah, back to Angela here.

Now we know what vibe coding is. What is vibe physics? Presumably Travis is implying like the same definition, right? Fully giving into the vibes, embracing exponentials, and forgetting the physics even exists. He’s gonna get to what’s known about quantum physics and push beyond that via text communication with an LLM.

And we’re approaching what’s known, and I’m trying to poke and see if there’s breakthroughs to be had. And I’ve gotten pretty damn close to some interesting breakthroughs just doing that.

I don’t like the idea of vibe coding. I think that’s bad. But I can understand its use case. Like, if you’re not an aspiring software developer, and you just want to, I don’t know, send your friend an email with an executable … And when she clicks it, it makes a sound that’s like beep.

You could do that by vibe coding, right? You could send a prompt to an LLM that's like: create an executable that opens a GUI that has a button on it that says push me, and when the user clicks on that button, it goes beep. And then you could iterate, right? You could be like, oh, I'd like the lines to be more squiggly, and I'd like the text to be black and the button to be red. And then eventually you would get to something that works, and you could send it, and your friend gets the email and she clicks on it, and it's like, beep. And it's fine.

Like … You’re not trying to understand how that works. You’re not a coder. This is not your job. You just wanted to make a little thing. You could totally vibe code for that. Like, that’s fine. That would totally work. And of course, using an LLM to do that would be way faster than learning how to do like coding with a graphics package and downloading …

I’ve already cut this together quite a bit. If you hear cuts in that, those are not mine. Those are in the actual videos. I hope my cuts are not noticeable.

Using an LLM to do that would be way faster than learning how to do coding with a graphics package and downloading a text editor and all that stuff.
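For the record, her beep-button program really is only a few lines if you sit down and learn it. A minimal sketch using Python's standard-library tkinter, with the system bell standing in for the "beep" (cross-platform audio is messier), might look like this:

```python
# Minimal "push me" -> beep app, roughly as described in the video.
# tkinter ships with Python; root.bell() rings the system bell.
import tkinter as tk

def build_app():
    root = tk.Tk()
    root.title("push me")
    tk.Button(root, text="push me", fg="red",
              command=root.bell).pack(padx=40, pady=20)
    return root

if __name__ == "__main__":
    build_app().mainloop()
```

Which rather supports her point: for a throwaway toy like this, either route gets you a beep, and only one of them teaches you anything.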

I understand the use case of vibe coding, but vibe physics doesn't make sense. I mean, you can use LLMs to solve physics problems. Like, you could go and type in: you push a 10 kilogram box across a table with a force of 64 newtons and it takes four seconds to stop. How far did it go? And I'm sure the LLM could solve that problem. I maintain that you will have to check it, and I also maintain that that will absolutely not teach you physics, because watching someone, even if it's an LLM, solve a problem is not how you learn physics. You actually have to do it yourself and use your brain. It'll probably give you the right answer for something simple like that, but you have to check. Doing physics professionally is not about getting the right answer. Doing physics is about developing an understanding.

This is a very important point, by the way, and a general point about science that many people get wrong. It's not about getting a final answer, because the answer in a field like this will always change; you're only approaching what reality actually is with a theoretical framework, a concept. So maybe, as she says, you could use an LLM to get an answer, but that's not the point here.
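Her "you have to check it" point is easy to act on here. The box problem as quoted is actually underspecified (is the 64 newtons the push or the friction?), but under one plausible reading, where 64 N is a constant friction force bringing the already-moving 10 kg box to rest in 4 seconds, the check is a few lines of constant-acceleration kinematics:

```python
# One plausible reading of the quoted problem: a constant 64 N friction
# force decelerates a 10 kg box uniformly until it stops after 4 s.
m = 10.0   # mass in kg
F = 64.0   # decelerating force in N
t = 4.0    # time until the box stops, in s

a = F / m                       # deceleration: 6.4 m/s^2
v0 = a * t                      # speed it must have started with: 25.6 m/s
d = v0 * t - 0.5 * a * t ** 2   # distance covered while stopping

print(d)  # 51.2 (metres)
```

And that's exactly the issue: if you can't set up these three lines yourself, you also can't tell whether the LLM picked a sensible reading of the problem, let alone whether its arithmetic holds.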

Doing physics, like professionally, is not about getting the right answer. Doing physics is about developing an understanding of the physical world, right? Vibe coding specifically requires you not to understand what the code does. In its definition, vibe coding means you’re just trusting the LLM to make the product and you test the product and if it works, you’re like, we did it. You’re not understanding what’s in that code. It is a black box. That is not how physics works. The whole point of doing problems and running simulations and doing experiments is to understand physics.

Vibe physics as a concept does not make sense. Let me give you an example of an unsolved problem in physics: turbulence. You know turbulence; it's like turbulent flow, it's fluid dynamics, but it's chaotic, and we don't at this moment have a system of equations that perfectly describes these systems. We can predict the onset of turbulent flow, like when that will happen, the time scale, but then you get these little vortexes, and heat is transferred in a way that's not intuitive, and we just don't understand turbulent flow. It is an unsolved problem.

Imagine you're a software developer and you have created a machine learning tool to look at videos of clouds. So, like, atmospheric weather is sometimes an example of turbulence. And you have a thousand videos of turbulence developing, and you give them to your machine learning algorithm, and it analyzes all this data and develops a model for predicting when turbulent flow will happen. You have another thousand movies that haven't been used to train the model, and you use those to test your model, and it's great. You can predict when the turbulence will happen, you can predict the scale or the magnitude of it, how long it will last, and you use this to start making a weather prediction model. So you take what the clouds look like now and you're like, will we tornado? And it works. It works. It's amazing. You've done it, right? You understand turbulence now. No, no, no, you don't, right? Because all you have is an algorithm that takes data and tells you some information.

You don’t understand what is actually happening inside the clouds. You don’t understand the physics of the situation. Like you’re basically just running a simulation and saying like, yes or no, tornado, right? If you want to understand it on the basic physics level, you want equations of motion. You want to be able to calculate every single time step. You want to understand the underlying mechanisms that are causing the turbulence and how the different …

This is also a very important point that I think many people don't get, especially journalists reporting on science. Most people think you do science to get to an answer, and that's part of it: you have a question, you want an answer. But the real challenge, and the real way to validate what you've done, is to explain it. You need to explain how you got that answer. Just getting an answer isn't enough. And these days, I feel, people think the scientific process is there to give you an answer; they probably learned it that way in school. And while that is kind of true, the answer by itself is kind of worthless.

Let's say you do medical research and you're trying to predict cancer from people's genes. It is helpful to have that; you want to predict that. But what you really want to do is understand what's going on. You don't just want a machine that reads in some data and says this person will get cancer because they have these kinds of genes. What you really want is to understand how it all works: what the genes do, what they express, how that interacts with the body, and what actually causes the cancer, because from there you can understand further. Your goal is to understand the whole system, whatever you're studying, like the planet's atmosphere, black holes, history, or the human body. The idea is to understand as much of the whole system as you can. And you can only do that by understanding how you arrived at the answer. The answer in itself only gets you about halfway, and not really even halfway.

And the danger is that if you only go for the answer, you sometimes cannot tell if it's the correct answer. Sometimes everything will look right. It looks like you went through the scientific method, it's all great, you get this answer, but the answer might be wrong. And the way to judge how likely it is to be right is to get more of the understanding: how did I get there? This is why computer models can be really helpful in science, but sometimes also a problem, because you're not looking at the real world; you're simulating the system, let's say the Earth's atmosphere. And you can get answers that way, and they might be correct answers, maybe 95 or 99 percent of the time, but you're not actually understanding the system, because you built a simulation. You might understand your simulation, but there might be factors in the real world that are not in your simulation, or factors in your simulation that are not like this in the real world. So there is a danger that you think you understood something, but you really didn't.

You might get right predictions, but they might just be by chance, or you might have part of it right but not the whole thing, and then sometimes it falls apart in the crucial edge case. And when you're getting to really hard things, for example putting things into orbit or flying to the moon, you really want the understanding. You don't just want an algorithm that's right 99.5% of the time, because you might actually hit that half percent, and your space shuttle might explode, right? So you really want the other thing. And I think she explains it really nicely here. She is obviously a scientist; she does this every day; she has this understanding. But a lot of people don't. And a lot of journalists don't, because when they report on scientific matters, they're often satisfied with the answer. You have a question, you get the answer, and they're not digging deeper. They don't have that scientific mindset.

Where you go: yes, this is nice, but why? What we really want to know is why we get that answer. And I like how she approaches this here.

You want to be able to calculate every single time step. You want to understand the underlying mechanisms that are causing the turbulence and how the different parameters influence the development of turbulence. So you take your code and you’re like, hey, how are you predicting this? Give me the equations you’re using. Okay. It will do this. Your machine learning tool will give you the equations it’s using. Great.

So now you did it, right? You have an equation. You understand turbulence, right? You did it. No, no, you didn't, because this equation is going to be a huge sum of exponentials. It's going to have like 175 terms, and you will not understand it, and you cannot get from your algorithm how it developed this equation. It didn't derive it; it did an algorithmic iteration until it arrived at it. So this doesn't tell you anything about the underlying physics. I mean, maybe you could use your machine learning algorithm for some system where it gives you an equation with like eight terms, and you could try to work backwards and Frankenstein an idea of what the underlying physics is from that. But problems like turbulence are just intractable. You get 175 terms, you don't understand the underlying physics, okay? Vibe physics doesn't make any sense. These long-standing physics problems like turbulence are not tractable analytically, which is why they are unsolved problems.
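You can reproduce her many-terms problem in miniature without any exotic tooling. Here's a small illustrative sketch: fit a degree-9 polynomial to noisy samples of a sine wave. The fit predicts the sampled data very well, but nothing in its ten coefficients tells you the underlying law was "sine"; the physics is not recoverable from the terms:

```python
# Fit an opaque model: a degree-9 polynomial to noisy samples of sin(x).
# It predicts well on this range, but its 10 coefficients carry no hint
# that the underlying law is a sine -- prediction without understanding.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + rng.normal(0.0, 0.01, x.size)  # "measurements"

coeffs = np.polyfit(x, y, deg=9)   # ten opaque terms
y_hat = np.polyval(coeffs, x)
rms = np.sqrt(np.mean((y_hat - y) ** 2))

print(len(coeffs), rms)  # 10 terms, error near the noise floor
```

Scale that up to a chaotic system and 175 terms, and you have exactly the situation she describes: a model that predicts and explains nothing.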

And people already do this, by the way. This is like a 20-year-old example of machine learning, but that’s fine. The goal of physics is understanding. The goal of vibe coding is producing a product. Vibe physics doesn’t make sense. And I think I would, you know, further this and say the goal of science is understanding.

Not only physics: the goal of all science is understanding. Okay. And I would argue that vibe coding is probably a bad idea too, but that's fine. So she explained very nicely, I think, why this idea of vibe physics is a bad idea, a really bad idea. But why do I even care about this? Because I think the guy behind this patent application did vibe physics. That's what happened there. And I think this whole problem of people using AI in ways like this is a lot more widespread than a few delusional billionaires. They're just the ones who get a platform to talk about it because, you know, they're billionaires.

And in a great rant further on in this video, extrapolating from the crackpot emails she gets because of her YouTube exposure, Collier expands on what she thinks the wider problem in society is. And I think this is also great analysis. I don't think this patent case is the whole issue here; I think there is an underlying problem.

Crackpot emails have changed a lot in the last 18 months. There are a lot of people now using their conversations with these chat boxes as evidence that their theory has merit. They will say things like, I checked two different LLMs and they both said that this was a devastating physics solution and it’s going to topple quantum mechanics and it’s just …

And I don’t know why she says chat boxes instead of chat bots, but let’s just roll with it.

I don't usually respond, but if I do, I respond with something like this: ChatGPT is a magic box that will agree with whatever you say, which includes making up answers and incorrectly doing math. If you want to be a physicist, you must learn physics. There is no cheat code. ChatGPT will not teach you physics. And then the weirdest thing happens. They totally agree with me. They agree with me 100%. They say: of course I know about the hallucinations; of course I've had issues in the past where I've had to correct it on basic math. But now that we've moved beyond the physics I understand, it's amazing, right? Look at it. There must be something there. The chat box said so. Our boy Travis does the exact same thing, right?

According to Travis, he is an amateur physics enthusiast, okay? He was chatting with the chat box about basic physics and it made fundamental errors, which he corrected. And then he says: I know these things can’t come up with anything new. However, if you pull it like a donkey, you can get to the edge of physics. And even though he knows these three things, he says these three things, he’s still under the impression that ChatGPT is somehow doing this amazing thing. It is somehow pushing new physics. New physics is going to happen. It’s going to be new physics. Imagine if grad students had a chat box.

Imagine you're chatting with a chat box and it doesn't understand friction, and you correct it, and it's like, wow, great, thank you so much. And then you ask it if gravitons exist and you're like, whoa, what an amazing answer, oh my god, it's so smart. But you just had to correct it … What? How? … Do you see the dissonance? Everyone who uses these chat boxes knows that they make stuff up. They have seen things that are just blatantly incorrect. It says things that are wrong. But also it's amazing. Oh my god, it's so scary how good these things are going to be at physics. Oh my god, and we don't even need scientists anymore. It's like a real Terminator situation. And it's just like, how do you get there from the first two things?

Well, I have a thought train, and it's just my opinion, so, like, dip out now if that's not your thing. I think an overlooked component of AI discourse is how little respect AI enthusiasts have for people who create things and do stuff. It's most obvious in the art space, I think. Like, they spent the 2000s going: you deserve to live in poverty because you wanted to make art. Why did you get that underwater basket weaving degree? You should have gone into tech. Like, how dare you think that you deserve to be paid for your drawings? Boo, right? But now, suddenly, there's a tool that can make art, and now they want to get involved? Now they want to make art? Art in quotation marks, because it's garbage and I don't want to see it. I don't want to see your stupid AI bullshit.

But suddenly there's a thing that they can use to make art without any effort, without any time spent studying the classics, learning, honing their skills, and now they want to do it. And it's the same thing with physics. Do they want to spend eight years doing math and problem sets? Do they want to be sitting in a lab at 2 a.m. fiddling with components, trying to figure out why the signal says 8.4601 kilohertz instead of 8.392 kilohertz? No. No, they don't. But now there's a little chat box they can talk to that gives them compliments and tells them how smart and beautiful and wonderful they are, and suddenly, suddenly they want to do physics.

Do you know what I mean? Suddenly, in their imaginations but not in reality, they have all the same skills as all the people who trained so hard to do these things, and all they need is the right prompts. The skills are at the tips of their fingers. Oh my goodness.

They can be a famous theoretical physicist and they don’t even have to learn calculus. They just have to vibe physics and pull the LLM like a donkey to get to the edge of what’s known. And do they need the ability to check their answers? No. They did it. And then, after they’ve done this and presented it to people, people will just be like, well, no, that’s not art, I don’t want to look at that. No, that’s not a physics paper, even though, technically, it looks like a physics paper. In classic crackpot fashion, they are angry. They are angry that you are not respecting them the way they think they deserve to be respected, because even though they did not attain the skills through the traditional methods, suddenly they have them, because they’re really good at prompting the donkey to the edge of physics.

This is also a good point about the people who use these systems. When they use them to generate art, let’s say you used MidJourney about a year and a half ago, it just couldn’t do hands correctly. It would put thumbs where fingers are, it would put too many fingers, and necks would be at angles that would usually break your bones. People can see these errors. Everybody could do that; it’s just a thing that humans can do. So people were aware that they needed to correct these errors when they created, say, drawings or photorealistic images with these tools. But the moment they’re doing something where they can’t, like this example, where you ask physics questions and it gives you answers, you can’t verify the answer, because you don’t know. It’s outside the territory of “okay, I know how many fingers are on the human hand.” Why?

And I never understood this either. Why do people understand that they need to correct it with images, or with something else they know about, but then, when it comes to something they don’t understand, they just accept the answer? And that’s what she also says. That’s why I like this video so much: I’m in the same mindset. I was also so incredulous. How do they, in their minds, combine these two realities? Obviously, when they use a system for something they know about and it makes mistakes, they need to correct the mistakes. But then they use it for something they don’t know anything about, and they go, oh yeah, that must be correct. Like, how do you even get that idea?

And if you ever have a conversation with these people, it is so exhausting, because they know that. They’re like, I don’t have any physics training, I just really like physics. I don’t know what I’m doing. I don’t know any math, but it makes sense to me. Read my LLM paper. And it’s just … look at this three-page mess of sentences that solves gravity. It just solves gravity. And it’s like, what do you mean, it solves gravity? What do you think that means? It’s nonsense, and it’s so nonsense that you can’t even explain why it’s nonsense. It’s the not-even-wrong problem. And you can’t be mad at people for ignoring this when you spent zero time on it.

At least learn physics first. It’s Play-Doh. It’s not food. Do you know what I mean? Here’s an example of what I’m talking about from a Reddit thread: If a smart person like me is absolutely mind-blown by ChatGPT, why is my extended network, who are not exactly mentally challenged, not seeing it? I am borderline frustrated that I am basically alone in my feelings and amazement. The only place I found that it is really getting it is this subreddit.

Hmm, why are they not seeing it? Maybe because you’re wrong? Just an idea? Maybe because it’s bullshit?

He claims that ChatGPT is worsening the intellectual gap, because really smart people like him, with an IQ of 150, understand how useful it is. These smart people are using it, and their skills are just growing and growing and growing. They have the world at their fingertips, and us dum-dums who don’t think it’s useful are falling further and further behind. Oh my goodness.

To be fair to this poster, though, this post is like two years old, and I feel like ChatGPT was a lot more impressive two years ago, because people were falling for the “oh my god, just wait, if it’s this good now, just wait.” And it’s still the same.

No, it wasn’t. Okay, you might be falling for the … you know, the thing Silicon Valley pulls out every time. They did that with VR. They did it with blockchain. They did it with Alexa, with personal assistants: oh, just imagine. They did it with the iPhone. Just imagine how great this will be in, like, two years. Well, it’s still the same iPhone. It’s got like ten times the processing power, and I’ve got a supercomputer in my pocket, and I’m still using it to make a shopping list, or play dumb games, or, I don’t know, track my calories. You know, it’s a supercomputer, just imagine what you could do. Well, yeah, it never does that. That’s just what Silicon Valley does, and I think it’s a good strategy, because people keep falling for it every damn time.

Because people were falling for the “oh my god, just wait, if it’s this good now, just wait.” And it’s still the same. It’s summarized a thing. Again. I think people today are more aware of its limitations, so I’m sure this guy doesn’t feel this way today. But what an embarrassing thing to say. The people that fall into this hole, this chat box hole, are convinced that they are the ones that know what they’re doing. They are the only ones using this tool. They are so smart, and they stop talking to people about it and just focus on being stuck in this chat box hole. They are just isolated from the world.

They are inside their subreddits where they all have different theories of physics and they’re trying to get the LLMs to rank who has the best. They’re the future. Look at them pushing the boundaries of physics by having a text conversation with an LLM about physics they don’t understand.

Great question. Like, who talks like that? No one has a conversation where every interjection is, oh my gosh, what a thoughtful response. It’s weird. Unless, you know, you’re talking to a chat box, where they literally compliment your every single thought. Most people, I think, would be annoyed by this. They would recognize it from far away as one of those tactics from “how to influence people” books or whatever. But this …

I’m sorry for the background noise, by the way. I boosted the audio from this video a bit, and obviously she has some background noise there. She could probably do with a limiter, but yeah, it is what it is.

Most people, I think, would be annoyed by this. They would recognize it from far away as one of those tactics from “how to influence people” books or whatever. But this very specific type of person, who thinks they’re on to something, takes this as evidence that they are, in fact, onto something. If you’re already convinced you’re doing vibe physics, and you prompt your chat box with, hey, what if dark energy is caused by the expansion of dark matter? Is that crazy? And your chat box responds, of course that’s not crazy, that’s a really interesting idea, should we try to develop that? And you’re like, yeah.

Oh my God. And then, because you’re onto something, you ask your chat box to derive an equation that relates dark energy to dark matter and has an expansion term in it, and it just spits something out, and you’re like, wow, I don’t understand math, but it spits something out. I must be onto something. I did it. If you were talking to a physicist and you said, hey, what if dark energy is just caused by the expansion of dark matter? The physicist would be like, wait, what? I don’t think you understand dark energy. What do you mean, expansion? What do you mean? And they couldn’t answer, because it’s nonsense. It’s a nonsense thing to say. And part of the problem is …

Also, she … I think she’s using a Blue microphone … yeah … a pet peeve of mine. Those are just bad. That’s not helping matters.

But it spits something out … I must be onto something. I did it. If you were talking to a physicist and you said, hey, what if dark energy is just caused by the expansion of dark matter? The physicist would be like, wait, what? I don’t think you understand dark energy. What do you mean, expansion? What do you mean? And they couldn’t answer, because it’s nonsense. It’s a nonsense thing to say. And the conversation would end. But since they’re isolated in their little chat box space, where they have the tools of physics, they’re pulling the donkey to the edge of the vibes, and they think they’re onto something, even though it’s nonsense. ChatGPT would never tell you you’re speaking nonsense. It would never say you’re being silly, because you’re paying $28,000 a month and they want you to keep paying for that.

So I think Collier’s right here. I think this is becoming a very widespread problem. And I also think this is what happened with this patent. I think this guy did some vibe physics, some vibe patenting. This erstwhile state secret was probably just some guy doing vibe physics at home, and the bureaucrats at the patent office didn’t spot it initially. And that’s just what happens. I think this will probably happen more, until people get wise to this kind of thing, and maybe there will be systems, probably other AI systems, that spot AI-written content. And I think this is likely, because we’ve seen it in other places. We’ve seen respected scientific journals actually publish papers that were just AI slop, basically, just well disguised, and we’ve seen it crop up in other places where people weren’t aware. And especially this Bureau 99: I read a little bit about them, and because they deal with state secrets, they sit in locked rooms with closed windows and computers that are not connected to the internet. So they’re probably not on the internet a lot, and maybe they’re not up on how to spot AI tools.

Anyway, that’s my theory of what happened there. Speaking of AI slop: this podcast uses no AI-generated content or AI tools whatsoever, as a matter of principle. If you want to know more about this, visit fab.industries and click on the owl. And on that website, if you go to fab.industries/podcast, you will also find the show notes, which, as usual, include all the sources I used for this episode and all the others. So you can read up on it and make up your own mind.

Thanks to Michael Mullan-Jensen, Fadi Mansour, and Evgeny Kuznetsov for subscribing to the podcast on Substack and supporting it financially. Additional thanks to Sir Galteran, who continues to provide financial backing via Fountain.fm. If you want to join these good people in making sure that I can keep making these episodes, head to fab.industries/podcast.

This page not only tells you how to get the show via various podcast apps, it also explains the Substack subscription. That way you’ll get an optional email when a new episode is released, and you’ll be able to support me with a subscription of five euros a month or 50 euros a year, plus tax, or the equivalent in your local currency.

Thanks for listening to this episode of Punching Upwards. The theme music for the podcast is a track called Fight or Fall by Dev Lev, which I’ve used under license. I will be back with more detailed coverage of interesting news stories that you won’t get from the corporate news media next Sunday.

Until then: Goodbye and Merry Christmas! This has been Punching Upwards, a podcast by FAB INDUSTRIES. New media, new rules.


  1. Plötzlich Staatsgeheimnis, ARD Tagesschau, 15 December 2025
  2. Privatmann meldet Patent an: Plötzlich Staatsgeheimnis, Reddit thread (r/de), 15 December 2025
  3. Das staatsgeheime Patent, Hadmut Danisch, 15 December 2025
  4. Vibe Physics, Angela Collier, 24 July 2025
  5. Billionaires Convince Themselves AI Chatbots Are Close to Making New Scientific Discoveries, Gizmodo, 15 July 2025

– 30 –