

Lol you must be new around here. The .ml people are certified tankies.


I think you misread. The “over 30” is on the negative branch of “over 60”


I hope this doesn’t mean that the PeerTube devs are tankies too


Lol I also didn’t understand wtf OP was talking about and thought this was a schizo thing. The context missing from the title is apparently that it’s a company vehicle.


This gif just reminded me how awesome that movie is


Based on the pro-Trump people in my life, I’ve seen two classes:
Those in denial and ignorant in general (don’t really follow the news), who don’t believe for example that Trump is deporting people without due process, and blatantly violating the law and constitution.
Those who are so sucked into the MAGA own-the-libs circle-jerk that even when presented with the facts and proof of Trump doing something blatantly illegal, they will usually retort with something like “oh so when the Democrats do it it’s okay, but now that Trump is doing it it’s wrong??? You fucking communist!”
Both I believe are the result of being fed far right propaganda by YouTube, TikTok, Instagram, Facebook, etc. It’s the only explanation I have. These aren’t random people I don’t know. These are people I love and have known my whole life.
It hurts to see, and I don’t see a way to help them that doesn’t involve ruining those relationships. I avoid talking about politics around them because I know it’s going to make me resent them, and I don’t want that.


For that side of reddit, you’re right.
But for the uniquely useful side of reddit, federation won’t help. If I post a question like “how do I get this obscure game to run well on this obscure Linux distro?”, nobody is going to repost that for me, and if I don’t maximize the amount of eyeballs on it, it’s unlikely I’ll get an answer. My best choice is to post it on reddit, either in /r/linux_gaming or in the specific game’s subreddit.
I assume that most users who post anything at all on reddit do it to ask questions like that.


The reddit concept of subreddits also doesn’t work well with federation IMO (at least not Lemmy’s implementation).
Want to talk about video games? Well, there’s no /r/games; instead there are a bunch of different /c/games on different servers with varying amounts of activity. You basically gotta make the “pick a server” decision again whenever you post something. If you make the wrong choice, your post might not get seen by anyone, and even if you post to the biggest sub, you’ll be missing out on eyeballs from people on other servers who aren’t subscribed to that community for whatever reason.
For example, lemmy.ml/c/linux_gaming and lemmy.world/c/linux_gaming have around the same number of subscribers. Should I post to both? Maybe the same people subscribe to both, so that’s pointless? Or maybe I’ll miss out on a lot of discussion if I post only to one? There’s no way for me to know.
For me, it makes Lemmy less useful than reddit for asking really niche questions and getting useful answers. For posting comments on whatever pops up in my feed though, it works great.
I don’t have any good solutions to this, and I’m sure it has been considered already. When I first joined, I remember seeing people bring this same issue up, but it doesn’t seem like it went anywhere? (Or maybe it did?)
As a software engineer who started programming when he was 11, I get what you mean about “ladder climbers” feeling alien (my elitist term for them is “9-to-5ers” or “pedestrians”).
However, I think this question is dumb at least insofar as it won’t work to weed out the people you think it will. I don’t read fiction often, and the only scifi books I remember reading are Dune and Prey, but that’s very out of character for me. It’s pretty much luck that I read those, and more a factor of me just being an old fart (I’m almost 30, and that’s a lot of time to stumble upon at least one scifi book). Ask me this question a few years earlier and I’d draw a blank.
Both were good books, but nothing that I would consider a “favorite”. Dune is memorable to me just because it very clearly was based on Lawrence of Arabia, which I found neat. As for Prey, I only vaguely remember something about killer nanomachines, and that it was a fun read.
But if you’re specifically looking to hire someone you can talk scifi novels with, then it’s a very good question (as long as you’re mature enough to hire someone who says their favorite book is one that you hate).
This doesn’t account for blinking.
If your friend blinks, they won’t see the light, and thus would be unable to verify whether the method works or not.
But how does he know when to open his eyes? He can’t keep them open forever. Say you flash the light once, and that’s his signal to keep his eyes open. Okay, but how long do you wait before starting the experiment? If you do it immediately, he may not have enough time to react. If you wait too long, his eyes will dry out and he’ll blink.
This is just not going to work. There are too many dependent variables.
I’m seeing people say that the broadcaster (Fox Sports, of course) injected cheers into the broadcast for Trump, and boos for Taylor Swift. I don’t want to spread misinfo though so does anyone know if it’s true, or if there’s a way to validate it? (Eg by analyzing the audio)
96 GB+ of RAM is relatively easy, but for LLM inference you want VRAM. You can achieve that on a consumer PC by using multiple GPUs, although performance will not be as good as having a single GPU with 96GB of VRAM. Swapping out to RAM during inference slows it down a lot.
On archs with unified memory (like Apple’s latest machines), the CPU and GPU share memory, so you could actually find a system with very high memory directly accessible to the GPU. Mac Pros can be configured with up to 192GB of memory, although I doubt it’d be worth it as the GPU probably isn’t powerful enough.
Also, the 83GB number I gave was with a hypothetical 1 bit quantization of Deepseek R1, which (if it’s even possible) would probably be really shitty, maybe even shittier than Llama 7B.
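If you want to sanity-check what your own machine could actually hold, here’s a rough Python sketch (assumes PyTorch with CUDA is installed; the ~83 GB target is just the hypothetical 1-bit R1 figure from above, nothing official):

```python
# Rough check of how much VRAM a (possibly multi-GPU) box can pool for inference.
# Assumes PyTorch with CUDA support; the 83 GB target is purely illustrative.
import torch

def total_vram_gb() -> float:
    """Sum the total memory of every visible CUDA device, in GB."""
    if not torch.cuda.is_available():
        return 0.0
    return sum(
        torch.cuda.get_device_properties(i).total_memory
        for i in range(torch.cuda.device_count())
    ) / 1e9

if __name__ == "__main__":
    vram = total_vram_gb()
    print(f"Pooled VRAM across GPUs: {vram:.1f} GB")
    print(f"Could fit a hypothetical 1-bit R1 (~83 GB)? {vram >= 83}")
```

Even if the pooled number is big enough, splitting a model across cards still pays an interconnect penalty, which is part of why it’s slower than one big GPU.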
but how can one enter TB zone?
Data centers use NVLink to connect multiple Nvidia GPUs. Idk what the limits are, but you use it to combine multiple GPUs to pool resources much more efficiently and at a much larger scale than would be possible on consumer hardware. A single Nvidia H200 GPU has 141 GB of VRAM, so you could link them up to build some monster data centers.
Nvidia also sells prebuilt machines like the HGX B200, which can have 1.4TB of memory in a single system. That’s less than the 2.6TB for unquantized Deepseek, but for inference-only applications, you could definitely quantize it enough to fit within that limit with little to no quality loss… so if you’re really interested and really rich, you could probably buy one of those for your home lab.
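Just to put the H200 number in perspective, here’s a quick back-of-the-envelope (only the 141 GB and ~2.6 TB figures from above go into it):

```python
# How many 141 GB H200s would it take just to hold unquantized Deepseek R1's
# weights (~2.6 TB)? Ignores activations, KV cache, and any runtime overhead.
import math

H200_VRAM_GB = 141
R1_UNQUANTIZED_GB = 2600  # ~2.6 TB

print(math.ceil(R1_UNQUANTIZED_GB / H200_VRAM_GB))  # -> 19 GPUs for the weights alone
```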
If all you care about is response times, you can easily do that by just using a smaller model. The quality of responses will be poor though, and it’s not feasible to self host a model like chatgpt on consumer hardware.
For some quick math, a small Llama model is 7 billion parameters. Unquantized, that’s 4 bytes per parameter (32-bit floats), meaning it requires 28 billion bytes (28 GB) of memory. You can get that to fit in less memory with quantization, basically reducing quality for lower memory usage (use less than 32 bits per param, reducing both precision and memory usage).
Inference performance will still vary a lot depending on your hardware, even if you manage to fit it all in VRAM. A 5090 will be faster than an iPhone, obviously.
… But with a model competitive with ChatGPT, like Deepseek R1, we’re talking about 671 billion parameters. Even if you quantize down to a useless 1 bit per param, that’d be over 83 GB just to hold the weights in memory (unquantized it’s ~2.6TB). Running inference over that many parameters would require serious compute too, much more than a 5090 could handle. That gets into specialized, high-end architectures to achieve that kind of performance, and it’s not something a typical prosumer would be able to build (or afford).
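If you want to play with that math yourself, here it is as a quick Python sketch (nothing assumed beyond the parameter counts quoted above; real memory use would be higher once you add the KV cache and activations):

```python
# Memory needed just to hold model weights at a given precision.
# Ignores KV cache, activations, and runtime overhead.
def weight_memory_gb(params: float, bits_per_param: int) -> float:
    return params * bits_per_param / 8 / 1e9

for name, params in [("Llama 7B", 7e9), ("Deepseek R1", 671e9)]:
    for bits in (32, 16, 8, 4, 1):
        print(f"{name:12} @ {bits:2d}-bit: {weight_memory_gb(params, bits):8.1f} GB")

# Llama 7B    @ 32-bit ->   ~28 GB   (the figure above)
# Deepseek R1 @ 32-bit -> ~2684 GB   (~2.6 TB)
# Deepseek R1 @  1-bit ->   ~83.9 GB
```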
So the TL;DR is no.
And on top of it all, he would just get a presidential pardon
In the future, Google will create a chatbot doppelganger of you and use it to recommend products to your friends and family.
It’s the least greasy browser that actually works with the modern web.


Nope. I have some that I used as a kid, and sometimes I reuse them when I can’t think of a new one for some throwaway service.
The idea of having an alternate online persona doesn’t appeal to me at all, nor does the idea of being well known or recognized in general. I don’t go out of my way to remain untraceable or anything cool like that, but switching up usernames every once in a while is easy to do.
Brother that is a wild fucking story. Write that shit down and turn it into a Netflix true crime mini-series.
The only movie that legit made me cry was Seven Pounds with Will Smith. I only saw it once, and I tried real goddamn hard to suppress the tears, but a few leaked out. Luckily, none of the people I watched it with noticed, so my masculinity remained intact.
Fuck yeah it is. It’s a beautiful thing to be so moved by something that it brings you to tears (especially art). It’s what makes us human: we’re not just mindless beasts trying to eat and fuck, we’re experiencing life to its fullest.