unclejuan 20 hours ago [-]
I think this is the breaking point where replacing our C code with code written in memory-safe languages becomes urgent.
The vast majority of vulnerabilities found recently are directly related to memory-unsafe languages. It's very difficult to justify that a DNS/DHCP server can't be written in Rust or Go without using unsafe (well, maybe a few unsafe calls are still needed, but they would be a very small amount)...
How many CVEs in coreutils over the years? The project has the advantage of being old enough for them to be fixed. Call me when the Rust rewrite has been there that long and still has more CVEs than the GNU counterpart.
Maybe coreutils is so old that most security vulnerabilities were solved before CVEs even existed. But I think this is also a good argument against replacing a solid piece of C code with Rust just because it is "memory safe", only to then have lots of CVEs related to things like TOCTOU (which Rust will not save you from).
Orygin 6 hours ago [-]
I'm not against rewriting it in Rust, because I believe it really may help with certain classes of bugs, but indeed it should not instantly replace the old version for that reason. Both could coexist, even though you still need some guinea pigs to test it out and find issues.
Other than security, Rust brings major improvements to the tooling and may help bring in fresh members who wouldn't want to contribute to C code. I understand why some projects go that route.
ajsnigrutin 4 hours ago [-]
> Other than security, Rust brings major improvements to the tooling and may help bring in fresh members who wouldn't want to contribute to C code. I understand why some projects go that route.
But it loses old members who don't program in Rust, who already know the project and all the reasons why "this thing" was done "that way". And it introduces a new set of bugs, plus now you have two versions of the same thing to maintain.
bayindirh 8 hours ago [-]
People thinking that using a superior tool (on paper) automatically enables them to write better tools than ones that have been battle-tested over the years baffles me to no end.
Yes, you can go further, possibly faster. OTOH, nothing replaces experience and in-depth knowledge. GNU Coreutils embodies that knowledge and experience. uutils has none, and just tries to distill it with tests against the GNU one.
...and they get 44 CVEs as a result in their first test.
hombre_fatal 7 hours ago [-]
There was an article posted to HN recently that enumerated bugs in the rust rewrite.
Iirc the bugs had to do with Linux system details like filesystem TOCTOU and other things you'd only find out about in production.
Ideally we'd have a better way of navigating platform idiosyncrasies or better system APIs, so that every project doesn't have to relearn them at runtime. But the rewrite isn't pure downside.
bayindirh 7 hours ago [-]
I'm personally not against Rust rewrites in principle. But doing them in this drive-by hostile manner, especially with non-GNU licenses, smells like a "hostile takeover" to me, and dismantling core free software utilities is not nice in general.
> Ideally we'd have a better way of navigating platform idiosyncrasies or better system APIs
I believe trying to make something idiot-proof just generates better idiots, so I prefer having thinner abstractions on the lower level for maintenance, simplicity and performance reasons. The real solution is better documentation, but who values good documentation?
Graybeards and their apprentices, mostly from my experience. I personally still live with reference docs rather than AI prompts, and it serves me well.
pdimitar 3 hours ago [-]
My read on those was basically that the classic filesystems are hopelessly broken and that we needed ACID guarantees in next-gen filesystems like 20 years ago.
Not saying all of them were about FS TOCTOU bugs but once I got to these, that was my takeaway.
Obviously just using Rust cannot fix _all_ bugs, and I reject any criticism of Rust rewrites that tears down this particular straw man (its goal being to make it impossible to argue against). That's toxic, and I get surprised every time people on HN try to argue in that childish way.
But if we can remove all C memory safety foot guns then that by itself is worth a lot already.
Losing decades-old knowledge of how the dysfunctional lower-level systems work would be regrettable, even near-fatal, for any such project. That I'd agree with. But it also raises the question of whether those lower-level systems need a long, hard look and -- eventually -- a replacement.
overfeed 27 minutes ago [-]
I like rust as a language, but boy, the violent, zero-sum proselytising gets on my nerves. It's not enough for Rust to win, but C must be beaten to a pulp and its head mounted on a pike.
New projects wearing another project's skin have always bothered me - regardless of language. Ubuntu did a similar thing way back with libav masquerading as ffmpeg.
pdimitar 20 minutes ago [-]
How dramatic. I'll ask you as well: any proof for those colorful pictures you're drawing? Or are the people advocating for Rust a convenient target to vent other, very likely completely unrelated, frustrations?
I'm very happy to work with multiple programming languages without getting religious about any of them. They all have drawbacks, Rust included of course.
However, just my mere skepticism about the existence of the "violent proselytizing for Rust" of course immediately had me put in some imaginary group of fanatics. Which is of course normal. People love their binary camps, and nuance and discussion of merits be damned.
bayindirh 13 minutes ago [-]
As another data point, I have gone through enough flame wars, incl. the usual ones, and Rust.
There's certainly a fanatical group of Rust developers who really want to eradicate C and C++ from people's knowledge and all codebases in this universe, going so far as to openly hate the developers and designers of said languages.
Same was (or still is) true for some LLVM/clang people w.r.t. GCC.
This is why I use neither.
I'm always happy to discuss PLT and merits of programming languages with neutral parties, even in lively fashion, but when open-mindedness gets thrown out of the window, I do leave the room.
These kinds of healthy discussions will benefit both parties. Hubris, ego, closed-mindedness and fanaticism won't.
Well, I don't see them on HN is what I am saying. Obviously I'm not scanning 24/7, but every time I enter an HN thread where Rust is even loosely mentioned, I brace for the inevitable bullies imagining they are victims. And this thread is exactly the same, sadly.
I am genuinely curious where this fanatic group is. Where are you witnessing them?
LtWorf 3 hours ago [-]
But removing all the memory footguns while introducing hundreds of syscall footguns, where Rust won't help you at all, might not be better at all.
pdimitar 2 hours ago [-]
I agree, absolutely. Hence my adjacent thought that maybe all this should just be thrown away and we should invent an FS with ACID semantics.
I'm all for gradual improvements but at some point we should zoom even further out and pick our battles well.
overfeed 14 minutes ago [-]
> maybe all this should just be thrown away and we should invent an FS with ACID semantics.
You're describing WinFS, which Microsoft looked into and ultimately abandoned 20 years ago. I'm sure other groups have looked into this as well, but there's no such thing as a free lunch.
> I'm all for gradual improvements but at some point we should zoom even further out and pick our battles well.
That sounds a lot like picking more battles, yet we all still have 24 hours in a day. Recursively trying to perfect lower layers will have you like Hal changing the lightbulb: https://youtu.be/AbSehcT19u0
pdimitar 4 minutes ago [-]
Well, recursively trying to perfect lower layers is what I am advocating for us to not do.
As a guy who prefers to stop and think before coding, a lot of the older UNIX / GNU primitives seem broken to me (like the discussion about processes inheriting env vars that was here a while ago) and should be completely rethought. I also think people overreact and believe "everything will break". We have libraries and runtimes that implement only small parts of libc, and the deployed apps that use them have been running mostly fine for years.
My broader point was: shall we not start breaking away from all this legacy? Must we always rely on corporations to lead the charge?
But yes, I do of course agree with the only 24h a day thing. And likely nobody would want to pay for such a trail-blazing work anyway. Sad world.
Yokohiii 9 hours ago [-]
The problem is the lack of talent that is willing to work on this, not the language.
AI security researchers at least do something. If it were so easy to rewrite everything in Rust, I don't know why the response to these incidents isn't a rock-solid replacement in Rust the next day.
I'll tell you why that is. Working on these things doesn't give you stars on GitHub.
bluedragon1221 8 hours ago [-]
That is a very pretentious opinion. Dnsmasq is a ubiquitous project, ~14 years old, with maintainers who are very experienced in C and in the codebase. Telling them to rewrite in a language they are (maybe) unfamiliar with, even with the help of AI, will make these maintainers' experience worthless.
People seem to think that rewriting in rust just magically fixes all issues, but that's not how it works (See recent uutils CVEs). Rewrites tend to have more bugs because the code is new and hasn't been reviewed as much.
bigiain 7 hours ago [-]
I'm pretty sure we are getting close to the point where a few thousand bucks worth of tokens is enough for an agent coding session to reproduce a significant-sized (but not Linux-kernel-sized) C codebase in Rust that's 100% security bug for security bug compatible with the original. And _maybe_ "given enough eyeballs, all bugs are shallow" was true, or even close to true, once. But none of the "new code" ever has a _single_ eyeball cast over it. You know how sometimes you can stare at the code you wrote for weeks, but as soon as somebody else sees it they go "Hmmm, that bit looks odd. Are you sure it's right?" For most vibe coders or agentic coders, it's the same tool that generated the code that's looking for the bugs. It seems reasonable to assume that if a particular LLM generated the buggy code in the first place, it's at least as unlikely to find the bugs as a human who wrote the buggy code.
swiftcoder 6 hours ago [-]
> I'm pretty sure we are getting close to the point where a few thousand bucks worth of tokens is enough for an agent coding session to reproduce a significant-sized (but not Linux-kernel-sized) C codebase in Rust
Given a comprehensive test suite for the original, probably, yes. If the test suite isn't great, you are still going to spend a lot of time/tokens chasing edge cases.
> that's 100% security bug for security bug compatible with the original
You can do this part without AI. c2rust will give you a translation that retains all the security bugs (and all the memory unsafety). The hope is that the AI in the loop will let you convert it to idiomatic Rust (and hence avoid the memory unsafety, and in doing so, also resolve some of the security issues).
Yokohiii 7 hours ago [-]
I think I was ambiguous.
> If it were so easy to rewrite everything in Rust, I don't know why the response to these incidents isn't a rock-solid replacement in Rust the next day.
Meaning that AI/Rust enthusiasts are supposed to supply solutions. Of course they won't.
pdimitar 3 hours ago [-]
> People seem to think that rewriting in rust just magically fixes all issues
Citations and links, please.
Yokohiii 2 hours ago [-]
I am not a journalist, nor your nanny.
pdimitar 2 hours ago [-]
Then you're claiming falsehoods supporting your prejudices. Good to know.
Though I wonder why.
LtWorf 3 hours ago [-]
> I don't know why the response to these incidents isn't a rock-solid replacement in Rust the next day.
Go ahead and ask your AI to make it. What's stopping you?
krferriter 45 minutes ago [-]
> What's stopping you?
Based on their comment I guess they are worried they won't earn enough stars on github
user3939382 7 hours ago [-]
Maybe the problem is the way we think about dynamic memory. "Oh, I don't know what my maximum size for this is going to be, so everything has to be dynamic." Is that really true? Is it really the end of the world for programs to declare maximum acceptable sizes for their inputs, and after that error out or use a ring buffer? If sizes were known, you could design around that when using them. Your RAM bank is finite; why is every layer inside of it designed to pretend to be infinite? The Rust thing strikes me as a massive waste of time that doesn't solve the fundamental problem: modeling our programs correctly for reality, which is finite system resources, and not just memory. cf. Chrome loading 4 GB models onto people's machines.
ok123456 2 hours ago [-]
This is exactly how people thought before 1995. Then everyone started "smashing the stack for fun and profit." In the end, you're trading one set of bugs (dynamic memory bugs, hard to exploit reliably) for another (overflows, easy to exploit reliably).
x3n0ph3n3 19 hours ago [-]
I disagree -- we're clearly getting better safeguards by way of AI agents to spot potential vulnerabilities!
jabl 13 hours ago [-]
The question is whether the current situation is a short burst of activity, and once the most critical bugs get fixed the hype around AI vulnerability scanning will die down, or whether the current crop of system/infra software written in vulnerable languages like C is beyond redemption and will provide an endless source of critical bugs for AI to find until we fix them by rewriting them in Rust/Go/whatever.
yardstick 12 hours ago [-]
An eternal summer of CVEs is upon us
KronisLV 11 hours ago [-]
Seems like those “rewrite in Rust” folks had a point after all (the viability of it for any number of projects being another thing entirely).
Terr_ 12 hours ago [-]
A better use of LLMs: To help translate the vast majority of C/C++ developers' output into memory-safe languages. :p
lionkor 9 hours ago [-]
You're likely joking, but in case someone else misunderstands; this is not going to work. Rust with unsafe{} is the only thing you can translate directly to, even with LLMs. Rust with extensive unsafe{} is not something anyone wants to debug or maintain, and is near impossible to improve quickly.
sieabahlpark 2 hours ago [-]
[dead]
nullsanity 14 hours ago [-]
[dead]
washingupliquid 24 hours ago [-]
It's a good thing this software isn't used in millions of devices which almost never receive updates.
tuetuopay 6 hours ago [-]
Well, it is a good thing to get control of your own hardware, when the vendor decides that no you won't do what you want with it.
amiga386 24 hours ago [-]
It's more of a good thing that, in most cases, it's on devices that won't send it any packets unless a client first authenticates to a Wi-Fi station or physically plugs into an Ethernet port.
leptons 19 hours ago [-]
Y2K26?
BLKNSLVR 13 hours ago [-]
When the contraction became longer than the standard notation.
JamesSwift 5 hours ago [-]
Its lame now, just season passes and loot boxes
romaniitedomum 1 days ago [-]
To quote a famous (in certain circles) bowl of petunias, "oh no, not again!"
BLKNSLVR 13 hours ago [-]
For a number of reasons, I feel that the only way we got here was via some kind of infinite improbability drive.
(mostly unrelated to topic at hand though)
romaniitedomum 12 hours ago [-]
> For a number of reasons, I feel that the only way we got here was via some kind of infinite improbability drive.
Oh very much so! In my mind, it seems that someone must have figured out what the universe was for, and now it's been replaced with something even more bizarre and inexplicable.
antod 24 hours ago [-]
Are you saying this is Arthur Dent's fault? (again)
For folks with more experience in this specific domain, dumb question: why is more software in this space not written in e.g. Erlang or some other garbage collected, concurrent language runtime?
jerf 2 hours ago [-]
The initial release of dnsmasq was in 2001. The list of viable languages for a high-performance network server at the time was still not all that long. Erlang wasn't on it. Too big a performance hit, too much opaque runtime that may not have been stable at the time, too few contributors, big dependency footprint of stuff most things wouldn't have installed. (When I used Erlang for a production system in more like the 2015 time frame it still had rough corners if you weren't using it exactly for the use case it was meant for.) This isn't specially a criticism of Erlang, it would have been like this across many languages and runtimes.
A lot of these systems that are getting hit, and will probably continue to be hit over the next few weeks or months, have a similar story. The Linux kernel's only other potentially viable choice was C++ at the time. OpenSSL, a perennial security offender, was started in 1998. You can look up your own favorite major system library with major security issues and it's probably the same story.
I'm as aggressive as anyone about saying "don't write a new project in C for network access", but cast me back to 1998 and I couldn't tell you what other viable choices there are either. There are safer languages, but they were much, much smaller than the C community, and I couldn't promise you how stable they were either. Java was out, and I don't know when to draw the exact line as to when it became a serious contender for a network server, but late 200Xs would be my guess; certainly what I saw in 1999 wasn't yet.
Example: I ran a Haskell network server in 2011 for something relatively unimportant and it fell over under conditions that would not have been very extreme for a production network; I know it was Haskell and not my code because I reused the same code base in 2013 with no changes in the core run loop and it did about 90% better; still not enough that I would have put that system into a real production use case but enough to show it wasn't my code failing. So while Haskell may have existed in the 200Xs, it wouldn't have qualified as a viable choice for a network server at the time.
There's a lot more viable choices today than there used to be.
asa400 1 hours ago [-]
Great context, thanks. I wasn't in the industry then so this is interesting to hear how decisions were being made at the time.
ok123456 2 hours ago [-]
OCaml was fine in 2001.
LtWorf 3 hours ago [-]
In C you can normally map structs directly onto network packets, so that's quite easy. In other languages it's often not as simple.
Plus of course they are slower and bigger.
strenholme 23 hours ago [-]
Shameless plug time:
My own MaraDNS has been extensively audited now that we’re in the age of AI-assisted security audits.
Not one single serious security bug has been found since 2023. [1]
The only bugs auditors have been finding are things like “Deadwood, when fully recursive, will take longer than usual to release resources when getting this unusual packet” [2] or “This side utility included with MaraDNS, which hasn’t been able to be compiled since 2022, has a buffer overflow, but only if one’s $HOME is over 50 characters in length” [3]
I’m actually really pleased just how secure MaraDNS is now that it’s getting real in depth security audits.
Well, as you bundle Lua 5.1 (as Lunacy), instead of making it a library and loading it, and you bundled the 2012 version, you're probably affected by CVE-2014-5461 and others. Lua hasn't been security-fix free.
Now, I should probably explain why I’m using Lua 5.1 instead of the latest “official” version of Lua. Lua has an interesting history; in particular Lua 5.1 is the most popular version and the version which is most commonly used or forked against. Adobe Illustrator uses Lua 5.1, and Roblox uses a fork of Lua 5.1 called “luau”. LuaJIT is based on Lua 5.1, and other independent implementations of Lua (Moonsharp, etc.) are based on versions mostly compatible with Lua 5.1.
Lua 5.1 has a remarkably good security history, and of course I take responsibility for any security bugs in the Lua 5.1 codebase since I use the code with the relatively new coLunacyDNS server (Lua 5.1 isn’t used with the MaraDNS or Deadwood servers).
Lua 5.1 is used to convert documentation, but those scripts are run offline and the converted documents are part of the MaraDNS Git tree.
shakna 17 hours ago [-]
Yeah, I've had patches submitted to Moonscript, Fengari, and luau. No need to sell me on why 5.1 is useful. Each version is a new language, not just a few fixes or niceties.
I'm not convinced that vendoring, instead of embedding, is the right way.
The patch landing in 2021, instead of 2014, being one of those concerns.
(And you might want to recheck your assumption of how big 'int' will be, for rg32. C defines it in terms of minimum size, not direct size. int16_t isn't necessarily an alias.)
strenholme 12 hours ago [-]
>>>The patch landing in 2021, instead of 2014, being one of those concerns.<<<
What makes you think I was using Lua in 2014? Seriously, do you even know how to use “git log”?
And, yes, this can be easily checked with a tiny C program:

    #include <stdint.h>
    #include <stdio.h>

    int main() {
        uint32_t foo = 0xfffffffd;
        uint64_t bar = 0xfffffffd;
        uint32_t a = 0;
        for (a = 0; a < 20; a++) {
            /* casts needed: %llx expects unsigned long long */
            printf("%16llx:%16llx\n",
                   (unsigned long long)foo++,
                   (unsigned long long)bar++);
        }
        return 0;
    }
If there’s a system where uint32_t is 64 bits, that’s a bug with the compiler (which isn’t following the spec), not MaraDNS.
Are you going to make any other negative false implications about MaraDNS? Because you’re making a lot of very negative accusations without bothering to check first.
Edit: Here’s a version of the above C program which works in tcc 0.9.25:

    #include <stdint.h>
    #include <stdio.h>

    void shownum(uint64_t in) {
        int32_t a;
        for (a = 60; a >= 0; a -= 4) {
            int n = (in >> a) & 0xf;
            if (n < 10) { printf("%c", '0' + n); }
            else        { printf("%c", 'a' + (n - 10)); }
        }
        return;
    }

    int main() {
        uint32_t foo = 0xfffffffd;
        uint64_t bar = 0xfffffffd;
        uint32_t a = 0;
        for (a = 0; a < 20; a++) {
            shownum(foo++);
            printf(":");
            shownum(bar++);
            puts("");
        }
        return 0;
    }
shakna 12 hours ago [-]
> What makes you think I was using Lua in 2014? Seriously, do you even know how to use “git log”?
... It was fixed, upstream, in 2014. Thanks for not checking the number at the start of the CVE, before launching straight into attack mode.
Which is the point. In 2020, when you added Lua, you added a vulnerability that had officially been fixed for six years. Because you vendored, and did not depend on any system package.
Apologies for being confrontational; accusations of there being security holes are serious accusations in my book, and need to be backed up with solid facts. Yes, that’s how seriously I take security with the software I make available on the Internet.
That number is a 32-bit number in the C code, but it’s converted into a 16-bit number. I used “int” to have it interface with other Lua code, since it’s safe to assume “int” can fit 16 bits, and yes, I do convert the number to a 16-bit one before passing it off to other Lua code:
Vendoring Lua 5.1 was forced; since I wanted to use Lua 5.1 (for reasons described above, e.g. LuaJIT compatibility), I had to use code which hasn’t been updated upstream since 2012.
asddubs 45 minutes ago [-]
Why is Lua 5.1 the most popular version?
theamk 20 hours ago [-]
Unless the service accepts Lua code from the internet (and that would be a completely insane thing), CVE-2014-5461 will not apply. And while I have not reviewed every Lua CVE, I bet most (all?) of them require specifically crafted code, or at least highly complex user input (such as arbitrary JSON).
It's important to look at the actual vulnerability at the context, and not just list any CVE which matches by version.
strenholme 18 hours ago [-]
I should explain how MaraDNS uses Lua 5.1 (actually, Lunacy, my own fork with security bugs fixed as well as security hardening—including, yes, a patch against CVE-2014-5461), so you can get an idea of its attack surface.
MaraDNS has three components:
• MaraDNS, the authoritative server, which goes back all the way to 2001
• Deadwood, the recursive server, which was started back in 2007
• coLunacyDNS, which allows a DNS server to use Lua scripting; this didn’t exist until the COVID pandemic
Neither MaraDNS nor Deadwood use Lunacy (except as a scripting engine for converting documents); only coLunacyDNS uses Lunacy. coLunacyDNS uses a sandboxed and security hardened version of Lunacy (and, yes, I would accept bugs where someone could escape that sandbox), and the Lua scripts which coLunacyDNS uses can only be controlled by a local user and there is no capability to run Lua scripts remotely.
koolba 18 hours ago [-]
> coLunacyDNS, which allows a DNS server to use Lua scripting; this didn’t exist until the COVID pandemic
Why would a DNS server use Lua scripting? Is this for dynamically responding to requests rather than doing a pure lookup?
strenholme 17 hours ago [-]
It’s useful for things like 10.1.2.3.ip4.internal style queries, or having a DNS server that always returns a given IP for any query given to it.
More discussion is on the coLunacyDNS overview page:
It's important to maintain your dependencies by, say, embedding Lua, rather than rebranding it and then claiming you have no security flaws.
If I can find a CVE that _may_ affect the stack in five minutes, what _actual_ problems lurk there?
You vendor Lua - thus, it _is_ your responsibility to review every Lua CVE. You've set yourself up as the maintainer by vendoring.
strenholme 18 hours ago [-]
You weren’t replying to me. The parent poster made a good point—a vulnerability in Lua doesn’t mean software running Lua can necessarily be exploited—but, more to the point, I do update Lunacy and make sure it’s secure, just as I still take responsibility for verified important security holes in MaraDNS.
> It's important to look at the actual vulnerability at the context, and not just list any CVE which matches by version.
Unfortunately, that's not enough. Even if the vulnerable parts of the code are not being built, heck even if they have been completely erased from the source code, the auditors will still insist that you're vulnerable and must immediately upgrade, or else they will give your software a failing grade.
ajross 19 hours ago [-]
That seems wildly naive in the post-XSS era. We've been here before, and that kind of analysis turns out to be wrong almost every time.
"Well, sure, this component is insecure but an attacker can't reach it" is like a 50%+ positive signal for an unexpected privilege elevation bug.
gcr 20 hours ago [-]
MaraDNS is much less popular than dnsmasq though.
I have several libraries that I've written. Not one single serious security bug in them has been found since 1991. Granted, nobody uses my libraries...
Not to diminish your team's achievement! :D But it's important to contextualize claims like this with information about what your userbase looks like
strenholme 17 hours ago [-]
A lot of security and other audits have been performed against it though; MaraDNS, after all, is notable enough to have a Wikipedia page and hundreds of GitHub stars.
For example, when the Ghost Domain Name DNS vulnerability was discussed, MaraDNS was audited and named (MaraDNS was immune to the security bug, for the record)
I don't think that's relevant. You can still find security issues in software nobody uses.
The question is a matter of impact because of how used the software is.
VorpalWay 19 hours ago [-]
Way fewer people are going to look at obscure things, so a lower percentage of issues will likely have been found. There is less fame and fortune in spending security research time on obscure software. Most small libraries won't be covered by any bug bounty programs either, for example.
andrewjf 19 hours ago [-]
You don't need other people anymore to find security issues, you can do it yourself with AI.
rhdnfjtkfmf 18 hours ago [-]
Even accepting the premise, is it not immediately obvious to you that folks will be spending more money and effort aiming AI at higher-impact targets? This isn’t all-or-nothing.
cwillu 21 hours ago [-]
I remember being delighted finding maradns as an alternative to the “do everything” of dnsmasq way back when I set up a dns server, and more importantly, I haven't had to think about it since then.
ExoticPearTree 12 hours ago [-]
> Shameless plug time: My own MaraDNS has been extensively audited now that we’re in the age of AI-assisted security audits.
Out of curiosity: what is the point you’re trying to make? That there are alternatives to dnsmasq? That somehow your software is “better”?
This plug provides zero value to the dnsmasq discussion.
As others have pointed out: the more used a software is, the more scrutiny it gets and more bugs or edge cases are found.
z3ratul163071 14 hours ago [-]
Good job. But it is amazing we are still writing core networking tools in a vulnerable language such as C in 2026.
strenholme 12 hours ago [-]
Agreed, it made a lot more sense to write MaraDNS in C in 2001 though.
The main advantage of writing in C over Rust here in 2026 is that C has two different Lua interpreters, and there isn’t a port of Lua to Rust yet; [1] yes, there are ways to use the C version of Lua in Rust, but that’s different.
If I were to write a new server today, I could very well write it in Go, then use GopherLua for the Lua engine:
[1] If I were to use Rust, I would consider using Rune as an embedded language as per https://rune-rs.github.io/
kortilla 11 hours ago [-]
Flagged because this discussion about dnsmasq and another dns resolver implementation that has relatively no rollout worldwide by comparison is pointless.
binaryturtle 22 hours ago [-]
That's a bit shameless, indeed.
dnsmasq has served me well for like an eternity in multiple setups for different use cases. As all software it has bugs. And once located those get fixed. Its author is also easy to communicate with.
Why should I switch over to something way less proven? I'm quite sure your software also has bugs, many still not located. Maybe because it's less popular / less well known, nobody cares to hunt for those bugs? Which means even if the number of found bugs is lower in your software at the moment, and it may look more audited for this reason, it may actually be way less secure.
rgkpz 22 hours ago [-]
"All software has bugs" is the most meaningless statement ever. It is just used for bonding with fellow bug writers who sit at a virtual campfire and muse about inevitabilities.
Demonstrably some software has fewer bugs, and its authors are often hated, especially if they are a lone author like Bernstein. Because it must not happen!
Projects with useless churn and many bug reports are more popular because only activity matters, not quality.
dc396 19 hours ago [-]
If DJB is "hated", it isn't because he's a lone author (Linus Torvalds was once a lone author and I don't think he was hated). It's because he can be an asshole. To quote George Bernard Shaw, “The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”
strenholme 17 hours ago [-]
DJB is a lot of things, and I have great respect for him, even though I feel he didn’t responsibly maintain Qmail/DJBdns/Publicfile. He made MaraDNS more secure because I carefully read his documentation—I got the idea to have a random source port to give MaraDNS more security from him, which means MaraDNS was unscathed when DNS spoofing was independently discovered in 2007.
The point DJB made was this: It was possible for a skilled C programmer to make a server with few security holes. Even though that’s not as relevant now, with Rust having most of the speed of C and security built in, it did make the Internet a safer place for many years. I remember using Qmail and DJBdns to make the servers at the small company I worked for at the time more secure.
shermantanktop 19 hours ago [-]
“Fellow bug writers” is everyone. People who write fewer bugs exist, and a lone few who write many fewer.
I haven’t noticed antipathy, but I have noticed skepticism. I assume people with outlier records in any field get some extra inspection.
If it becomes jealousy-fueled nit-picking, those people are insecure jerks. But unusual track records are worth understanding.
zx8080 19 hours ago [-]
> "All software has bugs" is the most meaningless statement ever.
It's not! It's the foundation of all dev AI products marketing.
zamadatix 22 hours ago [-]
"All software has bugs" means "be wary of the one claiming they haven't had any in 3 years", not "I guess they're all equal". For extremely low security bug rates, either the scope is extremely narrow, the claim is dubious, or the project is a massive effort which the community talks about directly in posts rather than plugs (e.g. curl).
strenholme 21 hours ago [-]
DJB, with Qmail and DjbDNS (as well as Publicfile, which didn’t catch on in an era of CGI scripts), showed that one could have (mostly) security bug free software without the scope being “extremely narrow”, and without the claim being “dubious”.
It’s not normal for software to be so poorly written that one should doubt a claim that a security bug hasn’t been found in over three years. If you think the claim of no security bugs of consequence in three years is dubious, feel free to do a security audit of MaraDNS (or DjbDNS, which I will also take responsibility for, even though my software is, if you will, a “competitor” to DjbDNS), and report any bugs you find.
Speaking of DJB: DjbDNS has had a few security bugs over the years (though not that many), and I’m maintaining a fork of DjbDNS with all of the security bugs I know about fixed:
I am saying all this as someone who has had significant enough issues with DJB’s software that I ended up writing my own DNS server so I didn’t have to use his (I might not have done so if DjbDNS had been public domain in 2001, but oh well).
(As a matter of etiquette, it’s a little rude to claim someone is saying something “dubious”, especially when the claim is backed up with solid evidence [multiple audits didn’t find anything of significance in the last year, as I documented above], unless you have solid evidence the claim is dubious, e.g. a significant security hole more recent than three years old)
3ASAF 21 hours ago [-]
People here don't know that MaraDNS was already popular on extremely critical security mailing lists that basically hated anything but qmail and postfix. If you introduce more bugs and blog about them, it will probably gain in popularity. :)
fc417fc802 21 hours ago [-]
> It’s not normal for software to be so poorly written that one should doubt a claim that a security bug hasn’t been found in over three years.
Can you back that claim up with at least some sort of theory? Because it doesn't match my perception of the real world, nor does it match my mental model of how CVEs happen.
Is that not begging the question? You have asserted X and now you point to a particular track record to back the claim of X up but the track record only serves as valid evidence of X if we already accept your assertion that X is the case.
zamadatix 15 hours ago [-]
I never used Qmail, so I won't comment on it, but I will say I absolutely consider djbdns narrow in scope as well (before accounting for the Unix approach, used perhaps even more heavily than in MaraDNS, of breaking that already narrow scope down into even more focused binaries).
I believed (and continue to believe) that DNS software containing, e.g., an authoritative DNS server which lacks native TCP or DNSSEC support falls squarely into the "narrowly scoped" bucket, and I'd appreciate it if you didn't try to decide my opinion for me on any given project in the future.
strenholme 12 hours ago [-]
The point of djbdns and qmail was this: It allowed administrators to run a local DNS server securely without needing to constantly patch the code. They were limited in scope, but were perfect for admins who valued security over features.
In an era when DNS was otherwise a monoculture, djbdns was a welcome breath of fresh air.
Agreed, and that was a good use case + timing (at least for me a ways back :D). I.e. djbdns being narrow in scope isn't necessarily a bad decision; it just doesn't serve as the counterexample to narrow scope that it was introduced as.
vasco 16 hours ago [-]
> Demonstrably some software has fewer bugs
You literally wrote "fewer" instead of "none", thereby agreeing with the sentence you claimed is meaningless.
daneel_w 22 hours ago [-]
> Why should I switch over to something way less proven?
Must they prove their software to you? They're offering an alternative, not bargaining for a deal.
fc417fc802 21 hours ago [-]
When you offer up an alternative as technically superior in some manner then yes, it is on you to demonstrate such a claim in a convincing manner. "No bugs in 3 years in this software with a much smaller audience and also look AI audits!" comes across as off topic shameless self promotion. At least if an insightful technical discussion ensued the subthread might prove worthwhile but so far it's just the usual tired shit flinging.
strenholme 21 hours ago [-]
I have far more evidence of a very good security record with MaraDNS than “No bugs in 3 years in this software with a much smaller audience and also look AI audits!”
• The software has been around for 25 years
• The software is popular enough to have been subjected to dozens of security code audits, including two audits in the post-AI era
• In those 25 years, only two remote “packet of death” bugs have been found
• Also, in those same 25 years, only a single report of a remotely exploitable memory leak has been found
This isn’t something which, as implied here, has a lot of security bugs only because no one has used or audited the software. This is a long term, mature code base which has only had a few serious security bugs in that timeframe.
If this evidence isn’t “convincing” to you, I don’t know what evidence would be “convincing”.
fc417fc802 20 hours ago [-]
For what it's worth I didn't know about maradns prior to this. Maybe it actually sees fairly wide use? Whether or not I accept your evidence would hinge on that. Regardless I think my point stands - if you don't lead with a convincing line of reasoning all that's left is an empty assertion. Unless I happen to recognize you as an authority in the field that's not going to do anything for me since by default you're some stranger on the internet that might be a dog for all I know.
To illustrate the issue with an extreme example, consider that a disused repository on github full of security holes is highly unlikely to have any CVEs regardless of age. The software has to present a worthwhile target (ie have a substantial long term userbase) before anyone will bother to look for exploits. (I guess that might change in the near future thanks to AI but I don't think we're there just yet.)
strenholme 11 hours ago [-]
“The software has to present a worthwhile target (ie have a substantial long term userbase) before anyone will bother to look for exploits”
MaraDNS is a worthwhile target; two people have been auditing it this year, in fact:
> dnsmasq has served me well for like an eternity in multiple setups for different use cases. As all software it has bugs. And once located those get fixed. Its author is also easy to communicate with.
I concur. The last part, however, is quite worrisome. Dnsmasq is run by one person, published on their own git server, and I did not see any information about other maintainers.
It is a super important (and great, and useful, and everything) piece of software, and I have fears of what will happen one day.
Sure, someone can clone and push to github but it may seriously fragment the ecosystem.
binaryturtle 6 hours ago [-]
Surprisingly a lot of popular projects are mainly one-person projects.
In my experience, projects led by large corporations burnt me a lot more in the past and caused more serious friction in my setups (e.g. breaking backwards compatibility for the sake of killing 5 lines of code that could cause some extra "development costs").
Anyway… that's not to say one is better than the other. Trust in a project builds differently over time (unrelated to the size of the development team).
---
Seeing here how someone "shamelessly" (in their own words) advertises their own competing project and then uses dummy accounts to bend the voting and discussion in their favour… that's definitely NOT how trust is built up. It's something which instantly makes me stay away from a project (better or not).
strenholme 2 hours ago [-]
> uses dummy accounts to bend the voting and discussion
This is a false accusation with no evidence to back it up. Let me state this clearly: I am not using sockpuppet accounts nor am I stacking the vote.
Y Combinator is a secure site, and @dang does not allow sockpuppets or stacked voting.
What you are seeing is the hacker spirit of the Y Combinator community: hackers believe in software diversity and strongly oppose monoculture, so they welcome people who bring up and discuss alternative software.
BrandoElFollito 5 hours ago [-]
My point was not in the one-person aspect (I use fantastic software done by one person, I also develop some (used by a niche)), it is the bus factor that is worrisome.
Some projects die because the dev abandons them (slowly or abruptly). Usually you see this happening with time and have the time to turn around.
The bus factor is drastic. One day the project lives and the next day it is gone. There is nobody anymore to push PRs etc. As I said, it can be picked up via a fork and you hope for the best (= that current users will somehow know). Having a backup contributor, even just to make the transition, is a nice thing to have.
> Seeing it here, how someone "shamelessly" (in their own words) adverts their own competing project and then uses dummy accounts to bend the voting and discussion in their favouring… that's definitely NOT how trust is build up. It's something which instantly makes me stay away from a project (better or not).
Not sure how this relates to my comment?
washingupliquid 24 hours ago [-]
Maybe this is the kick in the ass Debian needs to upgrade the embarrassingly ancient dnsmasq in "stable" because while I can't think of any new features, the latest versions contain many non-CVE bug fixes.
But I doubt it, they will lazily backport these patches to create some frankenstein one-off version and be done with it.
Before anyone says "tHaT's wHaT sTaBlE iS fOr": they have literally shipped straight-up broken packages before, because fixing it would somehow make it not "stable". They would rather ship useless, broken code than something too new. It's crazy.
zrm 23 hours ago [-]
They're not going to put a newer version in stable. The way stable gets newer versions of things is that you get the newer version into testing and then every two years testing becomes stable and stable becomes oldstable, at which point the newer version from testing becomes the version in stable.
The thing to complain about is if the version in testing is ancient.
wolttam 23 hours ago [-]
Looks like the version in stable is 2.91, which was released within a couple months of trixie. It's not 'ancient' by any stretch.
Yeah, I was about to comment: the parent says "if it is ancient", and it is not, so the root comment is a nothingburger. Stable is one release cycle old, and depending on how things play out, testing may have 2.93 or later anyway.
PunchyHamster 9 hours ago [-]
2.92 currently
koverstreet 23 hours ago [-]
No, that's exactly the thing to complain about.
That whole model dates to before automated testing was even really a thing, and no one knew how to do QA; your QA was all the people willing to run your code and report bugs, and that took time. Not to mention, you think the C of today is bad? Have you looked at old C?
And the disadvantage is that backporting is manual, resource intensive, and prone to error - and the projects that are the most heavily invested in that model are also the projects that are investing the least in writing tests and automated test infrastructure - because engineering time is a finite resource.
On top of that, the backport model heavily discourages the kinds of refactorings and architectural cleanups that would address bugs systemically, and encourages a whack-a-mole approach - because in the backport model, people want fixes they can backport. And then things just get worse and worse.
We'd all be a lot better off if certain projects took some of the enthusiasm with which they throw outrageous engineering time at backports, and spent at least some of that on automated testing and converting to Rust.
zrm 23 hours ago [-]
> That whole model dates to before automated testing was even really a thing, and no one knew how to do QA; your QA was all the people willing to run your code and report bugs, and that took time.
That's not what it's about.
What it's about is, newer versions change things. A newer version of OpenSSH disables GSSAPI by default when an older version had it enabled. You don't want that as an automatic update because it will break in production for anyone who is actually using it. So instead the change goes into the testing release and the user discovers that in their test environment before rolling out the new release into production.
> On top of that, the backport model heavily discourages the kinds of refactorings and architectural cleanups that would address bugs systemically and encourage a whack-a-mole approach - because in the backport model, people want fixes they can backport.
They're not alternatives to each other. The stable release gets the backported patch, the next release gets the refactor.
But that's also why you want the stable release. The refactor is a larger change, so if it breaks something you want to find it in test rather than production.
koverstreet 23 hours ago [-]
You're going to have to update production at some point, and delaying it to once every 2 years is just deferred maintenance. And you know what they say about that...
So when you do update and get that GSSAPI change, it comes with two years worth of other updates - and tracking that down mixed in with everything else is going to be all kinds of fun.
And if you're two years out of the loop and it turns out upstream broke something fundamental, and you're just now finding out about it while they've moved on and maybe continued with a redesign, that's also going to be a fun conversation.
So if the backport model is expensive and error prone, and it exists to support something that maybe wasn't such a good idea in the first place... well, you may want something, but that doesn't make it smart.
throw0101c 18 hours ago [-]
> You're going to have to update production at some point, and delaying it to once every 2 years is just deferred maintenance. And you know what they say about that...
Updated what, specifically in production?
If you need a newer version of Python or Postgres or whatever it is possible to install it from third-party repos or compile from source yourself. But having a team of folks watch all the other code out there is a load off my plate: not worrying about libc, or OpenSSH, or OpenSSL, or zlib, or a thousand other dependencies. If I need the latest version for a particular service I would install that separately, but otherwise the whole point of a 'packagized' system is to let other folks worry about those things.
> So when you do update and get that GSSAPI change, it comes with two years worth of other updates - and tracking that down mixed in with everything else is going to be all kinds of fun.
I've done in-place upgrades of Debian from version 5 to 11 at my last job on many machines, never once re-installing from scratch, and they've all gone fine.
Further, when updates come down from the Debian repos I don't worry about applying them because I know there's not going to be weird changes in behaviour: I'm more confident in deploying things like security updates because the new .deb files have very focused changes.
zrm 22 hours ago [-]
There are two different kinds of updates.
One is security updates and bug fixes. These need to fix the problem with the smallest change to minimize the amount of possible breakage, because the code is already vulnerable/broken in production and needs to be updated right now. These are the updates stable gets.
The other is changes and additions. They're both more likely to break things and less important to move into production the same day they become public.
You don't have to wait until testing is released as stable to run it in your test environment. You can find out about the changes the next release will have immediately, in the test environment, and thereby have plenty of time to address any issues before those changes move into production.
washingupliquid 20 hours ago [-]
> One is security updates and bug fixes.
That's where you're wrong. They're not one and the same.
Debian stable often defers non-security bug fixes for up to two years by playing this game.
I'm not interested in new features unless they make things actually work.
Debian stable time and again favors broken over new. Broken kernels, broken packages. At least they're stable in their brokenness.
Hence my complaint.
PunchyHamster 9 hours ago [-]
Haven't noticed much broken.
But I have noticed far more breakage in a distro that DOES backport features: RHEL/CentOS. So much that we migrated away from it after they backported a driver bug into CentOS 5 and then did the same for CentOS 6.
Also, rebuilding a package is trivial if you don't agree with what should and should not go into the stable version.
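For anyone who hasn't tried it, the rebuild looks roughly like this (a sketch, assuming deb-src lines are enabled in your sources; dnsmasq is just the example package):

```shell
apt-get source dnsmasq            # fetch the upstream tarball plus Debian packaging
sudo apt-get build-dep dnsmasq    # install the build dependencies
cd dnsmasq-*/
# ...add or drop whatever patches you disagree with in debian/patches/...
dpkg-buildpackage -us -uc -b      # build unsigned binary packages
sudo dpkg -i ../dnsmasq*.deb      # install your rebuilt .debs
```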
koverstreet 22 hours ago [-]
You definitely need different channels for high priority fixes and normal releases, stable and testing releases and all that.
But two years is impractical and Debian gets a ton of friction over it. Web browsers and maybe one or two other packages are able to carve out exceptions, because those packages are big enough for the rules to bend and no one can argue with a straight face that Debian is going to somehow muster up the manpower to do backports right.
But everyone else - those stuck with Debian shipping ancient dependencies, or upstream package maintainers expected to deal with bug reports from ancient versions - is expected to just suck it up, because no one else is big enough and organized enough to say "hey, it's 2026, we have better ways and this has gotten nutty".
Maybe the new influx of LLM discovered security vulnerabilities will start to change the conversation, I'm curious how it'll play out.
rlpb 22 hours ago [-]
> ...upstream package maintainers who are expected to deal with bug reports from ancient versions...
They are not expected to deal with this. This is the responsibility of the Debian package maintainer.
If you (as an upstream) licensed your software in a manner that allows Debian to do what it does, and they do this to serve their users who actually want that, you are wrong to then complain about it.
If you don't want this, don't license your software like that, and Debian and their users will use some other software instead.
koverstreet 21 hours ago [-]
If package maintainers were always the fine, upstanding package maintainers you imagine them to be, I wouldn't be complaining - but I have in fact had Debian ship my software and screw it up, and gotten a flood of bug reports, so... :)
I think you need to chill out. Relicensing the way you suggest would be _quite_ the hostile act, and I'm not going to do that either. But I am an engineer, so of course I'm going to talk about engineering best practices when it comes up.
You don't have to take it as an attack on your favorite distro - that really does pee in the pool of the upstream/downstream relationship between distros and their upstream.
fc417fc802 21 hours ago [-]
> I am an engineer, so of course I'm going to talk about engineering best practices when it comes up.
The trouble is you seem to be assuming that best practices for you, in your opinion, also apply to everyone else. They don't. Not everyone sees things the way you do or is facing the same issues or is making the same set of tradeoffs. There are downsides to what debian does but there are also upsides.
At this point, given the plethora of high-quality options available, as well as how easy it is to mix and match them on the same system thanks to container-related utilities and common practices, I really don't think there's any room for someone who doesn't like the Debian model (i.e. in general, as opposed to targeted objections) to complain about how they do things. If you want a cutting-edge userspace on Debian stable, you have at least 3 options between Nix, Guix, and Gentoo. There's also Flatpak and Snap, which come built in.
koverstreet 20 hours ago [-]
We're in the middle of a huge spike in LLM discovered security vulnerabilities, which means not everything will get assigned a CVE, a lot of people are watching repositories to look for exploitable bugs, and in the frenzy of backporting that people are now having to do things will get missed.
I wager it's only a matter of time before we see a mass rooting event that hits Debian hard while everyone running something more modern has already been patched.
I think that might be what cuts down on the grandstanding about "freedoms" and "that's how we've always done things". You're certainly free to keep doing things that way, right up until it becomes a public nuisance.
fc417fc802 20 hours ago [-]
No one is grandstanding about freedom here though? I claimed that the approach debian takes has both upsides and downsides. I stand by that. Personally I pull my networked services from testing while running stable on the host. I absolutely do not want constant churn of the filesystem code or drivers on my devices but I would also prefer not to run some franken build of ssh or apache or what have you. However I can also sympathize with others who need a more structured process and substantial lead time in staging prior to making major changes to production.
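(For reference, that stable-host-plus-testing-services split is usually wired up with apt pinning; a sketch, where the package names are only examples:)

```
# /etc/apt/sources.list.d/testing.list
deb http://deb.debian.org/debian testing main

# /etc/apt/preferences.d/testing.pref
# Keep everything on stable by default (100 < stable's default 500)...
Package: *
Pin: release a=testing
Pin-Priority: 100

# ...but let selected services track testing (990 > 500)
Package: dnsmasq openssh-server
Pin: release a=testing
Pin-Priority: 990
```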
Why would you expect LLMs not to be simultaneously leveraged to catch backports that were missed or inadvertently broken?
Given recent headlines I think it's far more likely that we see a mass rooting event hit one or more of the bleeding edge rolling release distros or language ecosystems due to supply chain compromise. Running slightly out of date software has never been more attractive.
washingupliquid 20 hours ago [-]
Have you ever considered leaving Linux drama and taking your talents to the BSD world?
OpenBSD in particular can use competent developers to fix their dogshit filesystem.
jabl 13 hours ago [-]
The inevitable drama between Kent and Theo would melt the internet, for sure. Bring the popcorn.
PunchyHamster 9 hours ago [-]
BSD devs have their heads too far up their arses to fix anything wrong with their distro
b112 21 hours ago [-]
Good grief, you are not forced to use Debian! Please leave the only stable distro alone, and just use one more suited to your style.
I assure you, enormous numbers of people prefer Debian the way it is. I do not, ever, want "new stuff" in stable. I have better things to do than fight daily change in a distro; it's beyond a waste of time and just silly.
If you want new things, leave stable alone, and just run Debian testing! It updates all the time, and is still more stable than most other distros.
Debian is the way it is on purpose; it is not a mistake, not leftover reasoning, and nothing you said seems relevant in this regard.
For example, there is no better way than backporting, when it comes to maintaining compatibility. And that's what many people want.
PunchyHamster 9 hours ago [-]
Getting through the issues once every 2 years is entirely fine. Stretch it further than that and you get problems. We do that for ~500 systems of very varied use. I wouldn't want to do it yearly (or dread it constantly on a rolling release), but I also wouldn't want to do it any less often, because of the issues you mentioned.
> And if you're two years out of the loop and it turns out upstream broke something fundamental, and you're just now finding out about it while they've moved on and maybe continued with a redesign, that's also going to be a fun conversation.
Having that sprung on you because you decided to run everything on latest is worse.
"Oh we have CVE, we now need to uproot everything because new version that fixes it also changed shit"
With release every year or two you can *plan* for it. You are not forced into it as with "rolling" releases because with rolling you NEED to take in new features together with bugfixes, but with Debian-like release cycle you can do it system by system when new version comes up and the "old" one still gets security fixes so you're not instantly screwed.
> So if the backport model is expensive and error prone, and it exists to support something that maybe wasn't such a good idea in the first place... well, you may want something, but that doesn't make it smart.
It exists in that format because people are running businesses bigger than "a man with a webpage deployed off master every few days"
dagenix 22 hours ago [-]
If you don't like the Debian model, don't use Debian. There are people that like the Debian model; it seems like you aren't one of them, though. That doesn't make them wrong.
toast0 21 hours ago [-]
> You're going to have to update production at some point, and delaying it to once every 2 years is just deferred maintenance. And you know what they say about that...
Doing terrible work every 2 years is better than doing it every day?
dwattttt 19 hours ago [-]
I've brought this up with leap second adjustments; a process you do once every two years is one you'll never get good at. If you want them to go smoothly, do them monthly.
LetsEncrypt has been a great example of this in certificate management.
cesarb 18 hours ago [-]
> Doing terrible work every 2 years is better than doing it every day?
And by skipping some releases, you will have less of that work. When something is changed in one release, then changed again on the next one, by waiting you only have to do the change once, instead of twice. And sometimes you don't even have to do anything, when something is introduced in one release and reverted in the next one.
vel0city 21 hours ago [-]
Personally I'd rather have a manageable stream of little bad things consistently over time rather than suddenly having a mountain of bad things one day.
PunchyHamster 9 hours ago [-]
Debian Testing works entirely fine for that use case. Each package gets ~2 weeks of shakeout in Unstable before it gets there, so there is a chance most of the teething issues with a new version are handled already, which is more than most rolling distros do.
toast0 21 hours ago [-]
That's a fine choice, but it doesn't fit with using packaged software from Debian stable.
cwillu 21 hours ago [-]
That's great; I prefer something different.
zie 22 hours ago [-]
Clearly you disagree with the debian stable perspective. That's fine, it's not for everyone. You can just run debian unstable or debian testing, depending on where exactly you draw the line.
If you want the rolling release like distro, just run debian unstable. That's what you get. It's on par with all the other constantly updated distros out there. Or just run one of those.
Also, Debian stable has a lifetime a lot longer than 2 years, see https://www.debian.org/releases/. Some of us need distros like stable, because we are in giant orgs that are overworked and have long release cycles. Our users want stuff to "just work" and stable promises if X worked at release, it will keep working until we stop support. You don't add new features to a stable release.
From a personal perspective: Debian Stable is for your grandparents or young children. You install Stable, turn on auto-update and every 5-ish years you spend a day upgrading them to the next stable release. Then you spend a week or two helping them through all the new changes and then you have minimal support calls from them for 5-ish years. If you handed them a rolling release or Debian unstable, you'd have constant support calls.
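(The "turn on auto-update" step is two lines of configuration; a sketch, assuming the unattended-upgrades package is installed:)

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```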
ryandrake 21 hours ago [-]
...or just leave grandparents on the previous version of Stable until they get a new computer. Honestly not a huge fan of upgrading software at all, if I'm the one supporting the machines.
zie 20 hours ago [-]
Just depends on if that's something grandparents/kids can/want to afford.
Personally, If the hardware is working great, seems like a waste of money replacing it, just to upgrade software. Especially with Debian oldstable -> Debian stable where it's usually quite easy and painless.
nsvd2 4 hours ago [-]
There are bleeding edge and rolling release distributions. Debian is simply not that and has no desire to be.
orf 21 hours ago [-]
> You don't want that as an automatic update because it will break in production for anyone who is actually using it
The problem with this take is that it’s stuck in the early 2000’s, where all servers are pets to be cared for and lovingly updated in place.
It’s also circular: you have the same problem with the current model if you don’t have a test environment. And if you do have a test environment, releases can be tested and validated at a much higher cadence.
washingupliquid 22 hours ago [-]
> What it's about is, newer versions change things. A newer version of OpenSSH disables GSSAPI by default when an older version had it enabled.
Debian patches defaults in OpenSSH code so it behaves differently than upstream.
They shouldn't legally be allowed to call it OpenSSH, let alone lecture people about it.
Let them call their fork DebSSH, like they have to do with "IceWeasel" and all the other nonsense they mire themselves into.
When you break software to the point you change how it behaves you shouldn't be allowed to use the same name.
b112 21 hours ago [-]
It's called open source. People are allowed to compile it as they wish. That's part of the point, and doing so doesn't mean anything is broken.
jeroenhd 23 hours ago [-]
If you want that, you don't want Debian. Other people do.
Some people will even run Debian on the desktop. I would never, but some people get real upset when anything changes.
Debian does regularly bring newer versions of software: they release about every two years. If you want the latest and greatest Debian experience, upgrade Debian on week one.
From your description, you seem to want Arch but made by Debian?
jampekka 22 hours ago [-]
> From your description, you seem to want Arch but made by Debian?
Isn't that essentially Debian unstable (with potentially experimental enabled)? I've been running Debian unstable on my desktops for something like 20 years.
koverstreet 23 hours ago [-]
Well, my workstation runs Debian sid, and all the newer stuff runs NixOS...
But that does nothing for people who write and support code Debian wants to ship - packaging code badly can create a real mess for upstream.
kiney 12 hours ago [-]
I run Debian on desktop and laptops. Because I want stable versions with only security backports
PunchyHamster 9 hours ago [-]
Debian Testing works just fine on desktop and it is up to date enough to not really be an issue.
And despite the name, it is probably more stable than the vast majority of rolling release distros
PunchyHamster 9 hours ago [-]
> That whole model dates to before automated testing was even really a thing, and no one knew how to do QA; your QA was all the people willing to run your code and report bugs, and that took time. Not to mention, you think the C of today is bad? Have you looked at old C?
The automatically tested Debian release is called Debian Testing. And it is stable enough.
Debian Stable is basically "we target particular release with our dependencies instead of requiring customer to update entire system together with our software". That model works just fine as long as you don't go too far back.
> On top of that, the backport model heavily discourages the kinds of refactorings and architectural cleanups that would address bugs systemically and encourage a whack-a-mole approach - because in the backport model, people want fixes they can backport. And then things just get worse and worse.
Narrator: It turned out things were not getting worse, they were just fine.
> We'd all be a lot better off if certain projects took some of the enthusiasm with which they throw outrageous engineering time at backports, and spent at least some of that on automated testing and converting to Rust.
That project is Red Hat, not Debian; they backport entire features to old versions (together with bugs!)
rlpb 22 hours ago [-]
Refactoring and rewrites prove time and time again that they also introduce new bugs and changes in behaviour that users of stable releases do not want.
There are other distributions for what you want. Debian also has stable-backports, which does exactly that.
No need to rage on distributions that also provide exactly what their users want.
e12e 18 hours ago [-]
How do you do QA without locking a set of features?
19 hours ago [-]
bluGill 22 hours ago [-]
You have far too much faith in automated testing.
Don't get me wrong, I use and encourage extensive automated testing. However only extensive manual testing by people looking for things that are "weird" can really find all bugs. (though it remains to be seen what AI can do - I'm not holding my breath)
koverstreet 21 hours ago [-]
100% - but that's where writing regression tests when people find things really helps with the stress levels of future-you :)
fulafel 14 hours ago [-]
Close: New versions go in unstable where development happens, testing is where things go to marinate for a while.
ploxiln 20 hours ago [-]
You don't have to use Debian stable, if you'd prefer Ubuntu every 6 months, or Fedora (6 months? 9 months?), or even Arch Linux updated daily ...
I use Arch on my laptop, when I got it 2 years ago the amd gpu was a bit new so it was prudent to get the latest kernel, mesa, everything. Since I use it daily it's not bad to update weekly and keep on top of occasional config migrations.
I use Debian stable on my home server, it's been in-place upgraded 4-ish times over 10 years. I can install weekly updates without worrying about config updates and such. I set up most stuff I wanted many years ago, and haven't really wanted new features since, though I have installed tailscale and jellyfin from their separate debian package repos so they are very current. It does the same jobs I wanted it to do 8 years ago, with super low maintenance.
But if you don't want Debian stable, that's fine. Just let others enjoy it.
asveikau 21 hours ago [-]
You can always ask the Debian project for your money back.
20 hours ago [-]
lutoma 22 hours ago [-]
For what it's worth, Debian had a security update for dnsmasq yesterday, presumably to address this.
About a decade ago I switched to Ubuntu LTS because of Debian’s “policy?” of having pretty old packages in “stable” and long release cycles.
Nowadays, even with Ubuntu’s two year or so release cycle I have to use 3rd party packages to have up to date software (PHP being one) and not some version from three years ago.
We no longer live in a world (with few exceptions) where running a 3-5 year old distribution (still supported) makes sense.
lmm 20 hours ago [-]
That's what stable is for though. Like, sure, stable's policy is ludicrous and you would have to be insane to run stable. But the remedy for that isn't to try to change Debian policy, it's to get people to stop running stable. Maybe once no-one uses it Debian will see sense.
afarviral 23 hours ago [-]
What if the new release which contains the fixes has new dependencies and those also have new dependencies? I assume they have to Frankenstein packages sometimes to maintain the borders of the target app while still having major vulns patched right in stable.
whatever you're on, stop, it's not making your brain any better
BrandoElFollito 7 hours ago [-]
It depends on how you look at it. I use Debian stable in the smallest possible configuration because it is, well, stable. A rock on which I put Docker to run actually useful services, which are updated the way I want.
If I was to run dnsmasq on Debian, it would be in a container. Since I run Pihole (in a container), it kinda is.
rlpb 22 hours ago [-]
> ...they have literally shipped straight-up broken packages before, because fixing it would somehow make it not "stable"
Irrelevant strawman, since you're not accusing the dnsmasq package in Debian stable of being straight-up broken.
SoftTalker 22 hours ago [-]
Never liked using dnsmasq. Always felt like too much in one tool. A local caching resolver, dhcp server, and tftp/pxe boot setup were always things I preferred to configure separately.
PunchyHamster 9 hours ago [-]
That's kinda the point. It is an "I run a small router" app in a box.
DHCP and DNS are connected, and PXE requires DHCP entries, so to do a simple setup you'd otherwise need to glue together at least three daemons, all with different config syntax.
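Concretely, the "small router in a box" setup is a single config file. A minimal sketch (interface name, MAC, and ranges are made up):

```conf
# One daemon, one syntax: DNS, DHCP and PXE/TFTP together
interface=br-lan
dhcp-range=192.168.1.100,192.168.1.200,12h      # DHCP pool
dhcp-host=aa:bb:cc:dd:ee:ff,nas,192.168.1.10    # static lease; "nas" also resolves in DNS
dhcp-boot=pxelinux.0                            # PXE boot file to hand out
enable-tftp
tftp-root=/srv/tftp
```

Because dnsmasq feeds its own DHCP leases into its DNS view, the hostname in `dhcp-host` resolves on the LAN with no extra glue.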
infinet 4 hours ago [-]
There are a few dnsmasq (only?) features that are indispensable to some. Examples: sending queries for *.example.com to certain upstream servers, returning NXDOMAIN for phishing sites, or adding all resolved IPs for *.example.org to an ipset for policy routing. The last one works on FreeBSD as well, although BSD does not have ipset. The list of *.example_xyz.com entries can be huge, and it is said recent dnsmasq can handle them efficiently.
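Each of those features maps to a single config line. A hedged sketch (domains and set names invented):

```conf
server=/example.com/10.0.0.53   # forward *.example.com to a specific upstream
local=/ads.example/             # answer *.ads.example locally; unknown names get NXDOMAIN
ipset=/example.org/vpnroute     # add every IP resolved under *.example.org to ipset "vpnroute"
```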
cwillu 21 hours ago [-]
That line of thinking is exactly why I ended up using maradns for my dns hosting way back.
10/10, no regrets, would recommend.
magicalhippo 18 hours ago [-]
What do you use for DHCP and how do you have DHCP update local DNS entries? Or do you just rely on mDNS to work?
cwillu 5 hours ago [-]
I use maradns to provide dns, not to resolve it. My vps does not require its own dhcp server.
SoftTalker 15 hours ago [-]
I use dhcpd. It doesn't update local DNS entries. I have no need for that.
koyote 19 hours ago [-]
I agree, it also goes against the Linux "way of doing things".
For example, Opnsense uses the dhcp portions of dnsmasq only (and unbound for the dns parts) which just feels 'wrong'.
gerdesj 19 hours ago [-]
When I first came across Linux you would download the code (very slowly) to /usr/src/linux (extract and cd) and run "make config". You'd answer quite a lot of y/n and later y/n/m questions and then copy a binary and later on run a script to put things in place. Then you would fix up lilo and off you trot ... or not 8)
Is that the Linux way you are on about? No obviously not 8)
I think you mean the "unix idealized but never really happened exactly but we are quite close if you squint a bit ... way" where each tool does one job well and the pipeline takes up the slack.
PunchyHamster 9 hours ago [-]
dhcpd is probably more quirky than dnsmasq, all software from ISC is kinda ass (also technically dhcpd is end of life)
Baltazhar 11 hours ago [-]
What is the nature of these findings? There’s a big difference between AI finding a buffer overflow vs. identifying a fundamental protocol flaw. Could AI realistically discover something like the Kaminsky attack? or even something which is an amplification exploit like the NXNSAttack?
rela-12w987 22 hours ago [-]
The AI bug report tsunami is not in all projects. As the top comment notes, MaraDNS didn't have any. I assume djbdns and tinydns didn't either, otherwise they'd shout it from the rooftops.
I never understood why some projects get extremely popular and others don't. I also suspect by now that the reports by tools that are "too dangerous to release" scan all projects but selectively only contact those with issues, so that they never have to admit that their tool didn't find anything.
philipwhiuk 21 hours ago [-]
> The AI bug report tsunami is not in all projects.
It's in popular projects.
3ASAF 21 hours ago [-]
No, postfix hasn't had a single valid bug found by AI. There are legions of other projects as well.
It is a distorted view, because projects become popular by allowing indiscriminate commits, bugs, maintainers.
If I'd start a new project I'd allow anyone in and blog about 100 exploits every year, because that is exactly what people want. I'm serious.
sailfast 15 hours ago [-]
"hopefully they will be releasing patched versions of their dnsmasq packages in a timely manner."
Hopefully!
PeterStuer 12 hours ago [-]
"The tsunami of AI-generated bug reports shows no signs of stopping, so
it is likely that this process will have to be repeated again soon."
But, ai-deniers are telling us there is nothing to see ...
thenickdude 15 hours ago [-]
LXD uses dnsmasq to provide DHCP and DNS for containers I think? Viable container escape?
1vuio0pswjnm7 17 hours ago [-]
I never liked dnsmasq or the Pi-Hole derivation and do not use it, but many people seem to love this software. I don't think there is any number of CVEs that could convince people to stop using it.
How bad is it if someone infects my home router using such a thing? They can MITM non-encrypted requests, but there are not a lot of those, right?
What else can they do, assuming the computers behind the router are all patched up?
zrm 23 hours ago [-]
They can block traffic to update servers so the computers behind the router aren't all patched up, then exploit them. They also get access to all the IoT devices on the internal network. They can also use your router as a proxy so their scraping/attack traffic comes from your IP address instead of theirs.
It's definitely bad.
PhilipRoman 23 hours ago [-]
If you blindly TOFU ssh sessions, those can be pwned easily in many common use cases. Legacy software configurations like NFS with IP authentication will be bypassed. Realistically the most likely scenario is using your home as a VPN, or a DDOS node.
raggi 21 hours ago [-]
yeah, and it's not like people recently launched a coffee shop that accepts payments over tofu ssh and a shell provider doing the same
Asmod4n 22 hours ago [-]
they could try and exploit any device on your network, and since they see which servers you connect to and how often you communicate with one they can write phishing mails which are tailored just for you.
nhattruongadm 23 hours ago [-]
[flagged]
xydac 24 hours ago [-]
Some of these would have made it into embedded hardware, making updates more challenging if, say, you were to flash an update.
7 hours ago [-]
ck2 24 hours ago [-]
if machine-learning can find all these holes
why can't machine-learning write a product from scratch that is flawless?
No, a collection of fuzzers and the Lean proof assistant found (almost) no bugs.
tclancy 23 hours ago [-]
Because the problem is asymmetric: the attacker only needs to find one hole at one time. The defender has to be flawless forever.
hnlmorg 23 hours ago [-]
It’s easier to break something than it is to make something that cannot be broken.
perlgeek 23 hours ago [-]
LLMs certainly make it more feasible to rewrite a product in a memory-safe language, eliminating a whole class of bugs.
Flawless software is hard for an LLM to write, because all the programs they have been trained on are flawed as well.
As a fun exercise, you could give a coding agent a hunk of non-trivial software (such as the Linux kernel, or postgresql, or whatever), and tell it over and over again: find a flaw in this, fix it. I'm pretty sure it won't ever tell you "now it's perfect" (and do this reproducibly).
chromacity 23 hours ago [-]
If humans can find bugs, why can't humans write flawless code?
Whatever the answer to that conundrum might be, LLMs are trained on these patterns and replicate them pretty faithfully.
jonhohle 23 hours ago [-]
Have you ever met a security engineer? I’ve never met one who was also a good engineer (not saying they don’t exist, I just haven’t met one). Do they find vulnerabilities? Sure. Could they write the tools they use to find vulnerabilities, most probably not.
tetha 20 hours ago [-]
How do you define flawless though?
The CVEs here have their fair share of silly C problems, but also more rigid input validation and handling. These more rigid validations exclude stuff which may even be valid by the spec, but entirely problematic in practice.
As examples, take a look how many valid XML documents are practically considered unsafe and not parsed, for example due to recursive entity expansion. This renders the parsers not flawless and in fact not in spec.
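The canonical instance is the "billion laughs" document: tiny on the wire, valid per the XML spec, and catastrophic to expand. A truncated sketch:

```xml
<?xml version="1.0"?>
<!DOCTYPE lolz [
  <!ENTITY lol "lol">
  <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
  <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
  <!-- ...each level multiplies by ten; the full nine levels expand to ~10^9 copies -->
]>
<lolz>&lol3;</lolz>
```

A parser that refuses to expand this is out of spec, and yet clearly the safer one.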
Or, my favorite bait - there should be a maximum length limit on passwords. Why would you ever need a kilobyte sized password?
_flux 23 hours ago [-]
Just because something is good at finding bugs, it may not find all the bugs. Finding a bug only tells you there was one bug you found, it doesn't tell if the rest is solid.
duped 22 hours ago [-]
You could argue the answer to this question depends on if you believe P=NP
darig 2 hours ago [-]
[dead]
tscburak 22 hours ago [-]
[flagged]
cedum 23 hours ago [-]
[dead]
mrbluecoat 23 hours ago [-]
> The tsunami of AI-generated bug reports shows no signs of stopping, so it is likely that this process will have to be repeated again soon.
Yes, you can go further, possibly faster. OTOH, nothing replaces experience and in-depth knowledge. GNU Coreutils embodies that knowledge and experience. uutils has none, and just tries to distill it with tests against the GNU one.
...and they get 44 CVEs as a result in their first test.
Iirc the bugs had to do with linux system details like fs toctou and other things you'd only find out about in production.
Ideally we'd have a better way of navigating platform idiosyncrasies or better system APIs, so that every project doesn't have to relearn them at runtime. But the rewrite isn't pure downside.
> Ideally we'd have a better way of navigating platform idiosyncrasies or better system APIs
I believe trying to make something idiot-proof just generates better idiots, so I prefer having thinner abstractions on the lower level for maintenance, simplicity and performance reasons. The real solution is better documentation, but who values good documentation?
Graybeards and their apprentices, mostly from my experience. I personally still live with reference docs rather than AI prompts, and it serves me well.
Not saying all of them were about FS TOCTOU bugs but once I got to these, that was my takeaway.
Obviously just using Rust cannot fix _all_ bugs, and I reject any criticisms towards Rust rewrites that tear down this particular straw man (its goal being to make it impossible to argue against). That's toxic and I get surprised every time people on HN try to argue in that childish way.
But if we can remove all C memory safety foot guns then that by itself is worth a lot already.
Losing decades-old knowledge on how the dysfunctional lower-level systems work would be regrettable and even near-fatal for any such projects. That I'd agree with. But it also raises the question on whether those lower-level systems don't need a very hard long look and -- eventually -- a replacement.
New projects wearing an another project's skin have always bothered me - regardless of language. Ubuntu did a similar thing way back with libav masquerading as ffmpeg.
I'm very happy to work with multiple programming languages without getting religious about any of them. They all have drawbacks, Rust included of course.
However, just my mere skepticism about the existence of the "violent proselytizing for Rust" of course immediately had me put in some imaginary group of fanatics. Which is of course normal. People love their binary camps, and nuance and discussion about merits be damned.
There's certainly a fanatic group of Rust developers who really want to eradicate C and C++ from people's knowledge and all codebases in this universe, going so far as to openly hate the developers and designers of said languages.
Same was (or still is) true for some LLVM/clang people w.r.t. GCC.
This is why I use neither.
I'm always happy to discuss PLT and merits of programming languages with neutral parties, even in lively fashion, but when open-mindedness gets thrown out of the window, I do leave the room.
These kinds of healthy discussions will benefit both parties. Hubris, ego, closed-mindedness and fanaticism won't.
Related: What Killed Smalltalk Could Kill Ruby, Too: https://www.youtube.com/watch?v=YX3iRjKj7C0
I am genuinely curious where this fanatic group is. Where are you witnessing them?
I'm all for gradual improvements but at one point and on we should zoom even further out and pick our battles well.
You're describing WinFS, which Microsoft looked into and ultimately abandoned 20 years ago. I'm sure other groups have looked into this as well, but there's no such thing as a free lunch.
> I'm all for gradual improvements but at one point and on we should zoom even further out and pick our battles well.
That sounds a lot like picking up more battles, yet we all still have 24 hours a day. Recursively trying to perfect lower layers will have you like Hal changing the lightbulb https://youtu.be/AbSehcT19u0
As a guy who prefers to stop and think before coding, to me a lot of the older UNIX / GNU primitives seem broken (like the env vars process inheriting discussion that was here a while ago) and should be completely rethought. I also think people overreact and believe "everything will break". And we have libraries and runtimes that only implement small parts of libc and the deployed apps that use them are running mostly fine for years.
My broader point was: shall we not start breaking away from all this legacy? Must we always rely on corporations to lead the charge?
But yes, I do of course agree with the only 24h a day thing. And likely nobody would want to pay for such a trail-blazing work anyway. Sad world.
AI security researchers at least do something. If it were so easy to rewrite everything in Rust, I don't know why the response to these incidents isn't a rock-solid replacement in Rust the next day.
I tell you why that is. Working on these things doesn't give you stars on github.
People seem to think that rewriting in rust just magically fixes all issues, but that's not how it works (See recent uutils CVEs). Rewrites tend to have more bugs because the code is new and hasn't been reviewed as much.
Given a comprehensive test suite for the original, probably, yes. If the test suite isn't great, you are still going to spend a lot of time/tokens chasing edge cases.
> that's 100% security bug for security bug compatible with the original
You can do this part without AI. c2rust will give you a translation that retains all the security bugs (and all the memory unsafety). The hope is that the AI in the loop will let you convert it to idiomatic rust (and hence avoid the memory unsafely, and in doing so, also resolve some of the security issues).
> If it was so easy to rewrite everything in rust, I don't know why the response to this incidents isn't a rock solid replacement in rust, the next day.
Meaning that AI/Rust enthusiasts are supposed to supply solutions. Of course they won't.
Citations and links, please.
Though I wonder why.
Go ahead and ask your AI to make it. What's stopping you?
Based on their comment I guess they are worried they won't earn enough stars on github
(mostly unrelated to topic at hand though)
Oh very much so! In my mind, it seems that someone must have figured out what the universe was for, and now it's been replaced with something even more bizarre and inexplicable.
"a remote attacker capable of asking DNS queries or answering DNS queries can cause a large OOB write in the heap."
Malformed DNS response causes "infinite loop and dnsmasq stops responding to all queries."
Malicious DHCP request can cause a buffer overflow.
Answer: no, but they're working on it.
https://forum.openwrt.org/t/dnsmasq-set-of-serious-cves/2500...
https://github.com/mirror/dd-wrt/issues/465
https://svn.dd-wrt.com/changeset/64944
https://svn.dd-wrt.com/changeset/64905
The release is "coming soon".
A lot of these systems that are getting hit, and will probably continue to be hit over the next few weeks or months, have a similar story. The Linux kernel's only other potentially viable choice was C++ at the time. OpenSSL, a perennial security offender, was started in 1998. You can look up your own favorite major system library with major security issues and it's probably the same story.
I'm as aggressive as anyone about saying "don't write a new project in C for network access", but cast me back to 1998 and I couldn't tell you what other viable choices there are either. There are safer languages, but they were much, much smaller than the C community, and I couldn't promise you how stable they were either. Java was out, and I don't know when to draw the exact line as to when it became a serious contender for a network server, but late 200Xs would be my guess; certainly what I saw in 1999 wasn't yet.
Example: I ran a Haskell network server in 2011 for something relatively unimportant and it fell over under conditions that would not have been very extreme for a production network; I know it was Haskell and not my code because I reused the same code base in 2013 with no changes in the core run loop and it did about 90% better; still not enough that I would have put that system into a real production use case but enough to show it wasn't my code failing. So while Haskell may have existed in the 200Xs, it wouldn't have qualified as a viable choice for a network server at the time.
There's a lot more viable choices today than there used to be.
Plus of course they are slower and bigger.
My own MaraDNS has been extensively audited now that we’re in the age of AI-assisted security audits.
Not one single serious security bug has been found since 2023. [1]
The only bugs auditors have been finding are things like “Deadwood, when fully recursive, will take longer than usual to release resources when getting this unusual packet” [2] or “This side utility included with MaraDNS, which hasn’t been able to be compiled since 2022, has a buffer overflow, but only if one’s $HOME is over 50 characters in length” [3]
I’m actually really pleased just how secure MaraDNS is now that it’s getting real in depth security audits.
[1] https://samboy.github.io/MaraDNS/webpage/security.html
[2] https://github.com/samboy/MaraDNS/discussions/136
[3] https://github.com/samboy/MaraDNS/pull/137
I fixed CVE-2014-5461 for Lunacy back in 2021:
https://github.com/samboy/lunacy/commit/4de84e044c1219b06744...
This is discussed here:
https://samboy.github.io/MaraDNS/webpage/security.html#CVE-2...
In addition, I have done other security hardening with Lunacy compared to Lua 5.1:
https://samboy.github.io/MaraDNS/webpage/lunacy/
Now, I should probably explain why I’m using Lua 5.1 instead of the latest “official” version of Lua. Lua has an interesting history; in particular Lua 5.1 is the most popular version and the version which is most commonly used or forked against. Adobe Illustrator uses Lua 5.1, and Roblox uses a fork of Lua 5.1 called “luau”. LuaJIT is based on Lua 5.1, and other independent implementations of Lua (Moonsharp, etc.) are based on versions mostly compatible with Lua 5.1.
Lua 5.1 has a remarkably good security history, and of course I take responsibility for any security bugs in the Lua 5.1 codebase since I use the code with the relatively new coLunacyDNS server (Lua 5.1 isn’t used with the MaraDNS or Deadwood servers).
Lua 5.1 is used to convert documentation, but those scripts are run offline and the converted documents are part of the MaraDNS Git tree.

I'm not convinced that vendoring, instead of embedding, is the right way.
The patch landing in 2021, instead of 2014, being one of those concerns.
(And you might want to recheck your assumption of how big 'int' will be, for rg32. C defines it in terms of minimum size, not direct size. int16_t isn't necessarily an alias.)
What makes you think I was using Lua in 2014? Seriously, do you even know how to use “git log”?
I added Lua to MaraDNS in 2020:
https://github.com/samboy/MaraDNS/commit/2e154c163a465ee7ead...
I patched it on my own in 2021:
https://github.com/samboy/MaraDNS/commit/efddb3a92b9cee30f11...
>>>you might want to recheck your assumption of how big 'int' will be
uint32_t is always 32-bit:
https://en.cppreference.com/c/types/integer
And, yes, this can be easily checked with a tiny C program:
If there’s a system where uint32_t is 64 bits, that’s a bug with the compiler (which isn’t following the spec), not MaraDNS.

Are you going to make any other negative false implications about MaraDNS? Because you’re making a lot of very negative accusations without bothering to check first.
Edit: Here’s a version of the above C program which works in tcc 0.9.25:
... It was fixed, upstream, in 2014. Thanks for not checking the number at the start of the CVE, before launching straight into attack mode.
https://www.lua.org/bugs.html#5.2.2-1
Which is the point. In 2020, when you added Lua, you added a vulnerability that had officially been fixed for six years. Because you vendored, and did not depend on any system package.
> uint32_t is always 32-bit:
Yah. Which is why I said 'int'.
As in the assumptions you made here:
https://github.com/samboy/LUAlibs/blob/master/rg32.c#L59
That number is a 32-bit number in the C code, but it’s converted into a 16-bit number. I used “int” to have it interface with other Lua code, and safely assume “int” can fit 16 bits; and yes, I do convert the number to a 16-bit one before passing it off to other Lua code:
https://github.com/samboy/LUAlibs/blob/master/rg32.c#L77
Here, I assume lua_number can pass 32 bits:
https://github.com/samboy/LUAlibs/blob/master/rg32.c#L45
https://github.com/samboy/MaraDNS/blob/master/coLunacyDNS/lu...
https://github.com/samboy/lunacy/blob/master/src/lmathlib.c#...
But it works without issue:
One sees “b0e6725c”, i.e. a 32-bit unsigned number. Likewise:

Gives us “b0e6 725c”.

Vendoring Lua 5.1 was forced; since I wanted to use Lua 5.1 (for reasons described above, e.g. LuaJIT compatibility), I had to use code which hasn’t been updated upstream since 2012.
It's important to look at the actual vulnerability in context, and not just list any CVE that matches by version.
MaraDNS has three components:
• MaraDNS, the authoritative server, which goes back all the way to 2001
• Deadwood, the recursive server, which was started back in 2007
• coLunacyDNS, which allows a DNS server to use Lua scripting; this didn’t exist until the COVID pandemic
Neither MaraDNS nor Deadwood use Lunacy (except as a scripting engine for converting documents); only coLunacyDNS uses Lunacy. coLunacyDNS uses a sandboxed and security hardened version of Lunacy (and, yes, I would accept bugs where someone could escape that sandbox), and the Lua scripts which coLunacyDNS uses can only be controlled by a local user and there is no capability to run Lua scripts remotely.
Why would a DNS server use Lua scripting? Is this for dynamically responding to requests rather than doing a pure lookup?
More discussion is on the coLunacyDNS overview page:
https://samboy.github.io/MaraDNS/coLunacyDNS/
If I can find a CVE that _may_ affect the stack in five minutes, what _actual_ problems lurk there?
You vendor Lua - thus, it _is_ your responsibility to review every Lua CVE. You've set yourself up as the maintainer by vendoring.
See this, for example:
https://samboy.github.io/MaraDNS/webpage/security.html#CVE-2...
Unfortunately, that's not enough. Even if the vulnerable parts of the code are not being built, heck even if they have been completely erased from the source code, the auditors will still insist that you're vulnerable and must immediately upgrade, or else they will give your software a failing grade.
"Well, sure, this component is insecure but an attacker can't reach it" is like a 50%+ positive signal for an unexpected privilege elevation bug.
I have several libraries that I've written. Not one single serious security bug in them has been found since 1991. Granted, nobody uses my libraries...
Not to diminish your team's achievement! :D But it's important to contextualize claims like this with information about what your userbase looks like
For example, when the Ghost Domain Name DNS vulnerability was discussed, MaraDNS was audited and named (MaraDNS was immune to the security bug, for the record)
https://web.archive.org/web/20120304054959/https://www.isc.o...
The question is a matter of impact because of how used the software is.
Out of curiosty: what is the point you’re trying to make? That there are alternatives to dnsmasq? That somehow your software is “better”?
This plug provides zero value to the dnsmasq discussion.
As others have pointed out: the more used a software is, the more scrutiny it gets and more bugs or edge cases are found.
The main advantage of writing in C over Rust here in 2026 is that C has two different Lua interpreters, and there isn’t a port of Lua to Rust yet; [1] yes, there are ways to use the C version of Lua in Rust, but that’s different.
If I were to write a new server today, I could very well write it in Go, then use GopherLua for the Lua engine:
https://github.com/yuin/gopher-lua
Although, even here, the advantage of C is that I could increase performance by using LuaJIT:
https://luajit.org/luajit.html
[1] If I were to use Rust, I would consider using Rune as an embedded language as per https://rune-rs.github.io/
dnsmasq has served me well for like an eternity in multiple setups for different use cases. As all software it has bugs. And once located those get fixed. Its author is also easy to communicate with.
Why should I switch over to something way less proven? I'm quite sure your software also has bugs, many still not located. Maybe because it's less popular/less well known, nobody cares to hunt for them? Which means even if the number of found bugs is lower in your software at the moment, and it may look more audited for this reason, it may actually be way less secure.
Demonstrably some software has fewer bugs, and its authors are often hated, especially if they are a lone author like Bernstein. Because it must not happen!
Projects with useless churn and many bug reports are more popular because only activity matters, not quality.
The point DJB made was this: It was possible for a skilled C programmer to make a server with few security holes. Even though that’s not as relevant now, with Rust having most of the speed of C and security built in, it did make the Internet a safer place for many years. I remember using Qmail and DJBdns to make the servers at the small company I worked for at the time more secure.
I haven’t noticed antipathy, but I have noticed skepticism. I assume people with outlier records in any field get some extra inspection.
If it becomes jealousy-fueled nit-picking, those people are insecure jerks. But unusual track records are worth understanding.
It's not! It's the foundation of all dev AI products marketing.
If one thinks the claim that no security bug has been found in over three years is dubious, feel free to do a security audit of MaraDNS (or DjbDNS, which I will also take responsibility for, even though my software is, if you will, a “competitor” to DjbDNS), and report any bugs you find.
Speaking of DJB, DjbDNS has had a few security bugs over the years (but not that many), but I’m maintaining a fork of DjbDNS with all of the security bugs I know about fixed:
https://github.com/samboy/ndjbdns
I am saying all this as someone who has had significant enough issues with DJB’s software that I ended up writing my own DNS server so I didn’t have to use his (I might not have done so if DjbDNS had been public domain in 2001, but oh well).
(As a matter of etiquette, it’s a little rude to claim someone is saying something “dubious”, especially when the claim is backed up with solid evidence [multiple audits didn’t find anything of significance in the last year, as I documented above], unless you have solid evidence the claim is dubious, e.g. a significant security hole more recent than three years old)
Can you back that claim up with at least some sort of theory? Because it doesn't match my perception of the real world, nor does it match my mental model of how CVEs happen.
https://samboy.github.io/MaraDNS/webpage/DNS.security.compar...
Also, my sister post: https://news.ycombinator.com/item?id=48112042
I had believed (and continue to believe) that DNS software containing, e.g., an authoritative DNS server which lacks native TCP or DNSSEC support falls squarely into the "narrowly scoped" bucket, and I would appreciate it if you'd not try to decide my opinion for me on any given project in the future.
In an era when DNS was otherwise a monoculture, djbdns was a welcome breath of fresh air.
https://lwn.net/2001/0208/
You literally wrote “fewer” instead of “none,” thereby agreeing with the sentence you claimed was meaningless.
Must they prove their software to you? They're offering an alternative, not bargaining for a deal.
• The software has been around for 25 years
• The software is popular enough to have been subjected to dozens of security code audits, including two audits in the post-AI era
• In those 25 years, only two remote “packet of death” bugs have been found
• Also, in those same 25 years, only a single remotely exploitable memory leak has been reported
This isn’t something which, as implied here, has a lot of security bugs only because no one has used or audited the software. This is a long term, mature code base which has only had a few serious security bugs in that timeframe.
Here is my evidence:
https://samboy.github.io/MaraDNS/webpage/security.html
If this evidence isn’t “convincing” to you, I don’t know what evidence would be “convincing”.
To illustrate the issue with an extreme example, consider that a disused repository on github full of security holes is highly unlikely to have any CVEs regardless of age. The software has to present a worthwhile target (ie have a substantial long term userbase) before anyone will bother to look for exploits. (I guess that might change in the near future thanks to AI but I don't think we're there just yet.)
MaraDNS is a worthwhile target; two people have been auditing it this year, in fact:
https://github.com/samboy/MaraDNS/pull/137
https://github.com/samboy/MaraDNS/security/advisories/GHSA-c...
I concur. The last part, however, is quite worrisome. Dnsmasq is run by one person, published on their own git, and I did not see any information about other maintainers.
It is super important (and great, and useful, and everything) software, and I have fears about what will happen one day.
Sure, someone can clone it and push it to GitHub, but that may seriously fragment the ecosystem.
In my experience, projects led by large corporations burnt me a lot more in the past and caused more serious friction in my setups (e.g. breaking backwards compatibility for the sake of killing 5 lines of code that could cause some extra "development costs").
Anyway… that's not to say one is better than the other. Trust in a project builds differently over time (unrelated to the size of the development team).
---
Seeing here how someone "shamelessly" (in their own words) advertises their own competing project and then uses dummy accounts to bend the voting and discussion in their favour… that's definitely NOT how trust is built up. It's something which instantly makes me stay away from a project (better or not).
This is a false accusation with no evidence to back it up. Let me state this clearly: I am not using sockpuppet accounts nor am I stacking the vote.
Ycombinator is a secure site and @dang does not allow sockpuppets nor stacked voting.
What you are seeing is the hacker spirit of the Ycombinator community: Hackers believe in software diversity, and strongly oppose monoculture, so welcome people who bring up and discuss alternative software.
Some projects die because the dev abandons them (slowly or abruptly). Usually you see this happening with time and have the time to turn around.
The bus factor is drastic. One day the project lives, and the next day it is gone. There is nobody left to merge PRs etc. As I said, you can have it picked up via a fork and hope for the best (= that current users will somehow find out). Having a backup contributor, even just to handle the transition, is a nice thing to have.
> Seeing here how someone "shamelessly" (in their own words) advertises their own competing project and then uses dummy accounts to bend the voting and discussion in their favour… that's definitely NOT how trust is built up. It's something which instantly makes me stay away from a project (better or not).
Not sure how this relates to my comment?
But I doubt it; they will lazily backport these patches to create some Frankenstein one-off version and be done with it.
Before anyone says "tHaT's wHaT sTaBlE iS fOr": they have literally shipped straight-up broken packages before, because fixing it would somehow make it not "stable". They would rather ship useless, broken code than something too new. It's crazy.
The thing to complain about is if the version in testing is ancient.
FWIW the fixes referenced here are already fixed in trixie: https://security-tracker.debian.org/tracker/source-package/d...
That whole model dates to before automated testing was even really a thing, and no one knew how to do QA; your QA was all the people willing to run your code and report bugs, and that took time. Not to mention, you think the C of today is bad? Have you looked at old C?
And the disadvantage is that backporting is manual, resource intensive, and prone to error - and the projects that are the most heavily invested in that model are also the projects that are investing the least in writing tests and automated test infrastructure - because engineering time is a finite resource.
On top of that, the backport model heavily discourages the kinds of refactorings and architectural cleanups that would address bugs systemically, and encourages a whack-a-mole approach - because in the backport model, people want fixes they can backport. And then things just get worse and worse.
We'd all be a lot better off if certain projects took some of the enthusiasm with which they throw outrageous engineering time at backports, and spent at least some of that on automated testing and converting to Rust.
That's not what it's about.
What it's about is, newer versions change things. A newer version of OpenSSH disables GSSAPI by default when an older version had it enabled. You don't want that as an automatic update because it will break in production for anyone who is actually using it. So instead the change goes into the testing release and the user discovers that in their test environment before rolling out the new release into production.
> On top of that, the backport model heavily discourages the kinds of refactorings and architectural cleanups that would address bugs systemically, and encourages a whack-a-mole approach - because in the backport model, people want fixes they can backport.
They're not alternatives to each other. The stable release gets the backported patch, the next release gets the refactor.
But that's also why you want the stable release. The refactor is a larger change, so if it breaks something you want to find it in test rather than production.
So when you do update and get that GSSAPI change, it comes with two years worth of other updates - and tracking that down mixed in with everything else is going to be all kinds of fun.
And if you're two years out of the loop and it turns out upstream broke something fundamental, and you're just now finding out about it while they've moved on and maybe continued with a redesign, that's also going to be a fun conversation.
So if the backport model is expensive and error prone, and it exists to support something that maybe wasn't such a good idea in the first place... well, you may want something, but that doesn't make it smart.
Updated what, specifically in production?
If you need a newer version of Python or Postgres or whatever it is possible to install it from third-party repos or compile from source yourself. But having a team of folks watch all the other code out there is a load off my plate: not worrying about libc, or OpenSSH, or OpenSSL, or zlib, or a thousand other dependencies. If I need the latest version for a particular service I would install that separately, but otherwise the whole point of a 'packagized' system is to let other folks worry about those things.
> So when you do update and get that GSSAPI change, it comes with two years worth of other updates - and tracking that down mixed in with everything else is going to be all kinds of fun.
I've done in-place upgrades of Debian from version 5 to 11 at my last job on many machines, never once re-installing from scratch, and they've all gone fine.
Further, when updates come down from the Debian repos I don't worry about applying them because I know there's not going to be weird changes in behaviour: I'm more confident in deploying things like security updates because the new .deb files have very focused changes.
One is security updates and bug fixes. These need to fix the problem with the smallest change to minimize the amount of possible breakage, because the code is already vulnerable/broken in production and needs to be updated right now. These are the updates stable gets.
The other is changes and additions. They're both more likely to break things and less important to move into production the same day they become public.
You don't have to wait until testing is released as stable to run it in your test environment. You can find out about the changes the next release will have immediately, in the test environment, and thereby have plenty of time to address any issues before those changes move into production.
That's where you're wrong. They're not one and the same.
Debian stable often defers non-security bug fixes for up to two years by playing this game.
I'm not interested in new features unless they make things actually work.
Debian stable time and again favors broken over new. Broken kernels, broken packages. At least they're stable in their brokenness.
Hence my complaint.
But I have noticed far more broken in distro that DOES backport features, RHEL/Centos. So many that we migrated away from it, when they backported a driver bug into centos 5 and then did the same backport of a bug for centos 6.
Also rebuilding package is trivial if you don't agree with what should and should not go into stable version
But two years is impractical and Debian gets a ton of friction over it. Web browsers and maybe one or two other packages are able to carve out exceptions, because those packages are big enough for the rules to bend and no one can argue with a straight face that Debian is going to somehow muster up the manpower to do backports right.
But everyone else, whether users stuck with Debian shipping ancient dependencies or upstream package maintainers fielding bug reports from ancient versions, is expected to just suck it up, because no one else is big enough and organized enough to say "hey, it's 2026, we have better ways, and this has gotten nutty".
Maybe the new influx of LLM discovered security vulnerabilities will start to change the conversation, I'm curious how it'll play out.
They are not expected to deal with this. This is the responsibility of the Debian package maintainer.
If you (as an upstream) licensed your software in a manner that allows Debian to do what it does, and they do this to serve their users who actually want that, you are wrong to then complain about it.
If you don't want this, don't license your software like that, and Debian and their users will use some other software instead.
I think you need to chill out. Relicensing the way you suggest would be _quite_ the hostile act, and I'm not going to that either. But I am an engineer, so of course I'm going to talk about engineering best practices when it comes up.
You don't have to take it as an attack on your favorite distro - that really does pee in the pool of the upstream/downstream relationship between distros and their upstream.
The trouble is you seem to be assuming that best practices for you, in your opinion, also apply to everyone else. They don't. Not everyone sees things the way you do or is facing the same issues or is making the same set of tradeoffs. There are downsides to what debian does but there are also upsides.
At this point, given the plethora of high-quality options available, as well as how easy it is to mix and match them on the same system thanks to container-related utilities and common practices, I really don't think there's any room for someone who doesn't like the Debian model (i.e. in general, as opposed to targeted objections) to complain about how they do things. If you want a cutting-edge userspace on Debian stable, you have at least three options between Nix, Guix, and Gentoo. There's also Flatpak and Snap, which come built in.
I wager it's only a matter of time before we see a mass rooting event that hits Debian hard while everyone running something more modern has already been patched.
I think that might be what cuts down on the grandstanding about "freedoms" and "that's how we've always done things". You certainly are, up until it becomes a public nuisance.
Why would you expect LLMs not to be simultaneously leveraged to catch backports that were missed or inadvertently broken?
Given recent headlines I think it's far more likely that we see a mass rooting event hit one or more of the bleeding edge rolling release distros or language ecosystems due to supply chain compromise. Running slightly out of date software has never been more attractive.
OpenBSD in particular can use competent developers to fix their dogshit filesystem.
I assure you, enormous numbers of people prefer Debian the way it is. I do not, ever, want "new stuff" in stable. I have better things to do than fight daily change in a distro; it's beyond a waste of time and just silly.
If you want new things, leave stable alone, and just run Debian testing! It updates all the time, and is still more stable than most other distros.
Debian is the way it is on purpose, it is not a mistake, not left over reasoning, and nothing you said seems relevant in this regard.
For example, there is no better way than backporting, when it comes to maintaining compatibility. And that's what many people want.
> And if you're two years out of the loop and it turns out upstream broke something fundamental, and you're just now finding out about it while they've moved on and maybe continued with a redesign, that's also going to be a fun conversation.
Having that sprung on you because you decided to run everything on latest is worse.
"Oh we have CVE, we now need to uproot everything because new version that fixes it also changed shit"
With release every year or two you can *plan* for it. You are not forced into it as with "rolling" releases because with rolling you NEED to take in new features together with bugfixes, but with Debian-like release cycle you can do it system by system when new version comes up and the "old" one still gets security fixes so you're not instantly screwed.
> So if the backport model is expensive and error prone, and it exists to support something that maybe wasn't such a good idea in the first place... well, you may want something, but that doesn't make it smart.
It exists in that format because people are running businesses bigger than "a man with a webpage deployed off master every few days"
Doing terrible work every 2 years is better than doing it every day?
LetsEncrypt has been a great example of this in certificate management.
And by skipping some releases, you will have less of that work. When something is changed in one release, then changed again on the next one, by waiting you only have to do the change once, instead of twice. And sometimes you don't even have to do anything, when something is introduced in one release and reverted in the next one.
If you want the rolling release like distro, just run debian unstable. That's what you get. It's on par with all the other constantly updated distros out there. Or just run one of those.
Also, Debian stable has a lifetime a lot longer than 2 years, see https://www.debian.org/releases/. Some of us need distros like stable, because we are in giant orgs that are overworked and have long release cycles. Our users want stuff to "just work" and stable promises if X worked at release, it will keep working until we stop support. You don't add new features to a stable release.
From a personal perspective: Debian Stable is for your grandparents or young children. You install Stable, turn on auto-update and every 5-ish years you spend a day upgrading them to the next stable release. Then you spend a week or two helping them through all the new changes and then you have minimal support calls from them for 5-ish years. If you handed them a rolling release or Debian unstable, you'd have constant support calls.
Personally, If the hardware is working great, seems like a waste of money replacing it, just to upgrade software. Especially with Debian oldstable -> Debian stable where it's usually quite easy and painless.
The problem with this take is that it’s stuck in the early 2000’s, where all servers are pets to be cared for and lovingly updated in place.
It’s also circular: you have the same problem with the current model if you don’t have a test environment. And if you do have a test environment, releases can be tested and validated at a much higher cadence.
Debian patches defaults in OpenSSH code so it behaves differently than upstream.
They shouldn't legally be allowed to call it OpenSSH, let alone lecture people about it.
Let them call their fork DebSSH, like they have to do with "IceWeasel" and all the other nonsense they mire themselves into.
When you break software to the point you change how it behaves you shouldn't be allowed to use the same name.
Some people will even run Debian on the desktop. I would never, but some people get real upset when anything changes.
Debian does regularly bring newer versions of software: they release about every two years. If you want the latest and greatest Debian experience, upgrade Debian on week one.
From your description, you seem to want Arch but made by Debian?
Isn't that essentially Debian unstable (with potentially experimental enabled)? I've been running Debian unstable on my desktops for something like 20 years.
But that does nothing for people who write and support code Debian wants to ship - packaging code badly can create a real mess for upstream.
And despite the name, it is probably more stable than the vast majority of rolling-release distros.
The automatically tested Debian release is called Debian Testing. And it is stable enough.
Debian Stable is basically "we target particular release with our dependencies instead of requiring customer to update entire system together with our software". That model works just fine as long as you don't go too far back.
> On top of that, the backport model heavily discourages the kinds of refactorings and architectural cleanups that would address bugs systemically, and encourages a whack-a-mole approach - because in the backport model, people want fixes they can backport. And then things just get worse and worse.
Narrator: It turned out things were not getting worse, they were just fine.
> We'd all be a lot better off if certain projects took some of the enthusiasm with which they throw outrageous engineering time at backports, and spent at least some of that on automated testing and converting to Rust.
That project is RedHat, not Debian, they backport entire features back to old versions (together with bugs!)
For what you want, there are other distributions for that. Debian also has stable-backports that does what you want.
No need to rage on distributions that also provide exactly what their users want.
Don't get me wrong, I use and encourage extensive automated testing. However only extensive manual testing by people looking for things that are "weird" can really find all bugs. (though it remains to be seen what AI can do - I'm not holding my breath)
I use Arch on my laptop, when I got it 2 years ago the amd gpu was a bit new so it was prudent to get the latest kernel, mesa, everything. Since I use it daily it's not bad to update weekly and keep on top of occasional config migrations.
I use Debian stable on my home server, it's been in-place upgraded 4-ish times over 10 years. I can install weekly updates without worrying about config updates and such. I set up most stuff I wanted many years ago, and haven't really wanted new features since, though I have installed tailscale and jellyfin from their separate debian package repos so they are very current. It does the same jobs I wanted it to do 8 years ago, with super low maintenance.
But if you don't want Debian stable, that's fine. Just let others enjoy it.
Nowadays, even with Ubuntu’s two year or so release cycle I have to use 3rd party packages to have up to date software (PHP being one) and not some version from three years ago.
We no longer live in a world (with few exceptions) where running a 3-5 year old distribution (still supported) makes sense.
If I was to run dnsmasq on Debian, it would be in a container. Since I run Pihole (in a container), it kinda is.
Irrelevant strawman, since you're not accusing the dnsmasq package in Debian stable of being straight-up broken.
DHCP and DNS are connected, and PXE requires DHCP entries, so without dnsmasq even a simple setup would need at least three daemons glued together, each with a different config syntax.
10/10, no regrets, would recommend.
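To illustrate the point above: a single dnsmasq instance can cover DHCP, DNS, and PXE boot with a handful of directives. This is a minimal sketch; the interface name, address range, and TFTP paths below are placeholders, not from any particular setup:

```
# DHCP on the LAN (placeholder interface and range)
interface=eth0
dhcp-range=192.168.1.100,192.168.1.200,12h

# DNS: resolve DHCP client hostnames under a local domain
domain=lan
expand-hosts

# PXE: point clients at a boot image served by the built-in TFTP server
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/srv/tftp
```

With separate daemons, the same setup would mean keeping a DHCP server, a DNS server, and a TFTP server in sync by hand.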
Is that the Linux way you are on about? No obviously not 8)
I think you mean the "unix idealized but never really happened exactly but we are quite close if you squint a bit ... way" where each tool does one job well and the pipeline takes up the slack.
I never understood why some projects get extremely popular and others don't. I also suspect by now that the reports by tools that are "too dangerous to release" scan all projects but selectively only contact those with issues, so that they never have to admit that their tool didn't find anything.
It's in popular projects.
It is a distorted view, because projects become popular by allowing indiscriminate commits, bugs, maintainers.
If I'd start a new project I'd allow anyone in and blog about 100 exploits every year, because that is exactly what people want. I'm serious.
Hopefully!
But, ai-deniers are telling us there is nothing to see ...
CVE-2026-2291 Heap buffer overflow, Infinite loop, Integer underflow, Heap buffer overflow ..
What else can they do, assuming the computers behind the router are all patched up?
It's definitely bad.
Why can't machine learning write a product from scratch that is flawless?
sure buddy
Flawless software is hard for an LLM to write, because all the programs they have been trained on are flawed as well.
As a fun exercise, you could give a coding agent a hunk of non-trivial software (such as the Linux kernel, or postgresql, or whatever), and tell it over and over again: find a flaw in this, fix it. I'm pretty sure it won't ever tell you "now it's perfect" (and do this reproducibly).
Whatever the answer to that conundrum might be, LLMs are trained on these patterns and replicate them pretty faithfully.
The CVEs here include their fair share of silly C problems, but also fixes that tighten input validation and handling. These more rigid validations reject input that may even be valid per the spec, but is entirely problematic in practice.
As an example, take a look at how many valid XML documents are considered unsafe in practice and not parsed, for instance due to recursive entity expansion. This renders the parsers not flawless, and in fact not spec-compliant.
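The entity-expansion case is easy to quantify. The sketch below builds a classic "billion laughs" style document (the entity names are made up): ten layers of entities, each expanding to ten copies of the layer below, so a source document of a few hundred bytes would balloon to a billion copies of the leaf string if a parser followed the entity definitions literally:

```python
# Build a "billion laughs" style DTD: each entity level expands
# to ten copies of the level below it.
levels = 10
entities = ['<!ENTITY a0 "lol">']
for i in range(1, levels):
    refs = "&a{};".format(i - 1) * 10
    entities.append('<!ENTITY a{} "{}">'.format(i, refs))
doc = "<!DOCTYPE x [{}]><x>&a{};</x>".format("".join(entities), levels - 1)

# The source document is tiny...
print(len(doc))  # well under 1 KB

# ...but fully expanding &a9; would yield 10^9 copies of "lol" (~3 GB).
expanded_copies = 10 ** (levels - 1)
print(expanded_copies)
```

This asymmetry is exactly why real-world parsers cap entity expansion and reject such documents, spec or no spec.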
Or, my favorite bait: there should be a maximum length limit on passwords. Why would you ever need a kilobyte-sized password?
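One concrete argument for a cap: password hashes are deliberately slow, so accepting unbounded input lets a client burn server CPU with multi-megabyte "passwords" (and bcrypt famously truncates at 72 bytes regardless). A minimal sketch, assuming a PBKDF2-based scheme; the limit and the `hash_password` helper are illustrative, not from any real API:

```python
import hashlib
import os

# Illustrative cap: long enough for any human (or password-manager)
# passphrase, short enough to bound the cost of one hash.
MAX_PASSWORD_BYTES = 1024

def hash_password(password: str, salt: bytes) -> bytes:
    """Hash a password with PBKDF2-HMAC-SHA256, rejecting absurd inputs."""
    raw = password.encode("utf-8")
    if len(raw) > MAX_PASSWORD_BYTES:
        # Reject instead of hashing: running a slow KDF over megabytes
        # of attacker-supplied input is a cheap denial-of-service vector.
        raise ValueError("password too long")
    return hashlib.pbkdf2_hmac("sha256", raw, salt, 100_000)

salt = os.urandom(16)
digest = hash_password("correct horse battery staple", salt)
print(len(digest))  # 32-byte SHA-256-sized output
```

The cap costs legitimate users nothing while closing off the pathological case.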
Welcome to the new world order.