

Just a basic programmer living in California
Neat! I’d heard of mutualistic mycorrhizal relationships, but not mycoheterotrophy
Open the sidebar by clicking the three-lines icon at the top right. The translate option is at the bottom of the sidebar.
To give the benefit of the doubt, it’s possible to simultaneously understand the thesis of the article and to hold the opinion that AI doesn’t lead to higher-quality products. That would likely involve agreeing with the premise that laying off workers is a bad idea, but disagreeing (at least partially) with the reasoning for why it’s a bad idea.
Yeah, the article seems to assume AI is the cause without attempting to rule out other factors. Plus the graph shows a steady decline starting years before ChatGPT appeared.
That’s a good point! The string is in there, and I can see it with strings. But in my research so far it’s looking like a simple string substitution might not be an option. The replacement string would be a Nix store path, which would be longer. That would shift subsequent bytes in the binary, which it sounds like would produce alignment issues that break things.
Apparently it’s ok to change the length of the ELF header, which is what patchelf does. But shifting bytes in the ELF body is a problem.
Now what I haven’t verified yet is whether the embedded binary is in the body or in the header. If it’s in the header - or even if just the interpreter string is in the header - then I might be good to go.
Write a NixOS module, publish it in a flake in the nixosModules output, and import that flake in your flake-based NixOS configs.
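As a minimal sketch (the flake URL, hostname, and module contents here are hypothetical), the module flake would expose something like:

{
  outputs = { self }: {
    nixosModules.default = { config, lib, pkgs, ... }: {
      # whatever options and config your module provides
      services.openssh.enable = true;
    };
  };
}

and a consuming NixOS config would pull it in as an input:

{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    my-modules.url = "github:example/my-modules"; # hypothetical

  };

  outputs = { self, nixpkgs, my-modules }: {
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        my-modules.nixosModules.default
        ./configuration.nix
      ];
    };
  };
}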
I use obsidian.nvim. It’s a Neovim interface to my Obsidian vaults, so I can work on my knowledge base in whichever app works best in the moment.
Oh, this is a good tip!
It comes down to what can be done or pre-generated at build or publish time versus what must be done at runtime (such as when a viewer accesses a post). Stuff that must be done at runtime is stuff you don’t have the necessary information to do at publish time. For example you can’t pre-generate a comments section because you don’t know what the comments will be before a post is published.
For stuff like email digests and social media posts I might set up a CI/CD system (likely using GitHub Actions) that publishes static content and does those other tasks at the same time. Or if I want email digests delivered on a set schedule instead of at publish time I might set up a scheduled workflow in the same CI/CD system. Either way you can have automation that is associated with your website without being directly integrated with your web server.
As you suggest, some stuff that must be done at runtime can be done with frontend JavaScript. That’s how I implement comments on my static site. I have JavaScript that fetches a Mastodon thread that I set up for the purpose, and displays replies under the post.
I don’t exactly follow your first and fourth requirements so it’s hard for me to comment more specifically. Transforming information from CSVs to HTML sounds like something that could naturally be done at build time if you have the CSVs at build time. But I’m not clear if that’s the case in your situation.
It seems to me that you’re asking about two different things: zero-knowledge authentication, and public key authentication. I think you’d have a much easier time using public key auth. And tbh I don’t know anything about the zero-knowledge stuff. I don’t know what reading resources to point to, so I’ll try to provide a little clarifying background instead.
The simplest way to authenticate a user if you have their public key is probably to require every request to be signed with that key. The server gets the request, verifies the signature, and that’s it - that’s an authenticated request. Although adding a nonce to the signed content would be a good idea if replay attacks might be a problem.
If you want to be properly standards-compliant you want a standard “envelope” for signed requests. Personally I would use the multipart/signed MIME type since that is a ready-made, standardized format that is about as simple as it gets.
You mentioned JSON Web Tokens (JWTs) which are a similar idea. That’s a format that you might think you could use for signing requests - it’s sort of another quasi-standardized envelope format for signed data. But the wrinkle is that JWTs aren’t used to sign arbitrary data. The data is expected to be a set of “claims”. A JWT is a JSON header, JSON claims, and a signature, all three of which are base64url-encoded and joined with dots. Usually you would put a JWT in the Authorization header of an HTTP request like this:
Authorization: Bearer $jwt
Then the server verifies the JWT signature, inspects the “claims”, and decides whether the request is authorized based on whether it has the right claims. JWTs make sense if you want an authentication token that is separate from the request body. They are more complicated than multipart/signed content since the purpose is to standardize a narrow use case while also supporting all of the features that the stakeholders wanted.
Another commenter suggested Diffie-Hellman key exchange which I think is not a bad idea as a third alternative if you want to establish sessions. Diffie-Hellman is used in nearly every modern https connection to establish a session key. In https the session key is used for symmetric encryption of all subsequent traffic over that connection. But the session key doesn’t have to be an encryption key - you could use the key exchange to establish a session password. You could use that temporary password to authenticate all requests in that session. I do know of an intro video for Diffie-Hellman: https://youtu.be/Ex_ObHVftDg
The first two options I suggested require the server to have user public keys for each account. The Diffie-Hellman option also requires users to have the server’s public key available. An advantage is that Diffie-Hellman authenticates both parties to each other so users know they can trust the server. But if your server uses https you’ll get server authentication anyway during the connection key exchange. And the Diffie-Hellman session password needs an encrypted connection to be secure. The JWT option would probably also need an encrypted connection.
Always a good one! It seems overdue for an update to include Clojure, Go, Rust, Typescript, Swift, and Zig.
This seems like a restatement of X. We still don’t understand Y. I’m especially confused about:
There was some hint that maybe you’re concerned about reproducibility for CIDs? If you fix the block size, hash algorithm, and content codec you’ll get consistent results. SHA-256 also breaks data into chunks of 64 bytes as it happens.
Anyway Wikipedia has a list of content-addressable store implementations. A couple that stand out to me are git and git-annex.
I’ve mainly worked as an employee so I don’t have as much experience with freelance gigs. But nearly every job I’ve had in 18 years has been through networking. Organizing and speaking at programming meetups opened a lot of doors for me. It gets me a lot of attention while giving me a chance to present myself as an expert.
Eventually I’d worked with enough people that whenever I’m looking for work, I find I know people who’ve moved to new companies that are hiring.
We live in a capitalist society. Most of Typst is open source, including the CLI, library, and IDE support, and the source is written in Rust, so why not share it in a Rust community?
At this point I don’t know what the difference is between waylandFull and the other Wine packages. Last I checked waylandFull pointed to a much older Wine version, but I see that’s just changed. Since Wayland support is now in Wine’s main branch, my guess is there’s no need for a Wayland-specific package.
When I was working on this yesterday (I think) only the staging and unstable Wine packages were on Wine 10. But yes, it looks like today all of the Wine packages in NixOS unstable are updated to Wine 10, so you could use wineWowPackages.stableFull, or whatever you want.
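For reference, pulling one of those into a NixOS configuration is just something like this (a sketch - swap in whichever attribute you want):

environment.systemPackages = [ pkgs.wineWowPackages.stableFull ];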
I’m sorry about the Bottles issue! I was using Bottles, but I couldn’t figure out how to get a Wine 10 runner, or how to get it to use the system Wine which is why I went to Lutris.
Good point! There is also a spot you can set this in game settings in Lutris.
I haven’t tried this in Steam. Steam still requires X11. Can Steam spawn a native Wayland window if it’s running in XWayland? I assumed not, which is why I went through Lutris. But if it can, that would be great!
Edit: Oh yeah, Steam can launch native Wayland games! Now that I think about it that makes sense - Steam spawns a sub-process that manages its own windows so the sub-process doesn’t get stuck in X11 land. This is great! I thought I was going to have to wait forever for Valve to release a Steam update with Wayland support!
I think the Proton options that Steam provides are not updated to Wine 10 yet, so they won’t run in Wayland without the special registry setting that the previous Wine version requires? I tried hacking in a custom compatibility runner that runs wine from the Nix package, but I got a message saying that a running instance of Steam could not be found. But I was able to get a runner that works from wine-tkg-git by following instructions here: the proton-tkg-build output goes in ~/.steam/root/compatibilitytools.d/.
I’ve come around to doing it this way too. systems is not automatically supplied as a flake input - you can get such an input like this:
inputs = {
  systems.url = "github:nix-systems/default";
  # ...
};

outputs = { self, nixpkgs, systems }:
  let
    eachSystem = f: nixpkgs.lib.genAttrs (import systems) (system: f nixpkgs.legacyPackages.${system});
  in
  {
    # ...
  };
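With that helper in place, a per-system output looks something like this (the package here is just a placeholder to show the shape):

packages = eachSystem (pkgs: {
  default = pkgs.hello;
});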
The handy thing about importing another flake to get a list of four strings is that anyone consuming your flake can override that input in case they want to use a system that isn’t included in your original flake. There is more information at https://github.com/nix-systems/nix-systems
I’ve been using nushell as my shell for a long while. Completions are not as polished as zsh’s - both the published completions for each program and the UX for accepting completions. But you get some nice things in exchange.
I LOVE using nushell for scripting! CLI option parsing and autocompletions are nicely built into the function syntax. You don’t have to use the shell for this: you can write standalone scripts, and I do that sometimes. But if you don’t use it as your shell you don’t get the automatic completions.
Circling back to my first point, writing your own completions is very easy if you don’t like the options that are out there. You write a function with the same name as the program you want completions for, use the built-in completions feature, and it’s done.
I’m impressed by the Kanban system you’ve set up there! Your backlog looks better groomed than any Kanban board I recall seeing.
I just play the same handful of games year after year so there’s not much to organize.
The images probably don’t have to look meaningful as long as it is difficult to distinguish them from real images using a fast, statistical test. Nepenthes uses Markov chains to generate nonsense text that statistically resembles real content, which is a lot cheaper than LLM generation. Maybe Markov chains would also work to generate images? A chain could generate each pixel based on the previous pixel, or on its neighbors, or some such thing.