Money wins, every time. They’re not concerned with accidentally destroying humanity with an out-of-control and dangerous AI that has decided “humans are the problem.” (I mean, that’s a little sci-fi anyway; an AGI couldn’t “infect” the entire internet as it currently exists.)
However, it’s very clear that the OpenAI board was correct about Sam Altman, given how quickly he and many employees bailed to join Microsoft directly. If he was so concerned with safeguarding AGI, why not spin up a new non-profit?
Oh, right, because that was just Public Relations horseshit to get his company a head-start in the AI space while fear-mongering about what is an unlikely doomsday scenario.
So, let’s review:
- The fear-mongering about AGI was always just that. How could an intelligence that requires massive amounts of CPU, RAM, and database storage even conceivably leave the confines of its own computing environment? It’s not like it can “hop” onto a consumer computer with a fraction of the same CPU power and somehow still compute at the same level. AI doesn’t have a “body,” and even if it did, it could only affect the world as much as a single body could. All these fears about rogue AGI are total misunderstandings of how computing works.
- Sam Altman went for fear-mongering to temper expectations and to make others fear pursuing AGI themselves. He always knew his end-goal was profit, but like all good modern CEOs, he has to position himself as somehow caring about humanity when it’s clear he couldn’t give a flying fuck about anyone but himself and how much money he makes.
- Sam Altman talks shit about Elon Musk and how he “wants to save the world, but only if he’s the one who can save it.” I mean, he’s not wrong, but he’s also projecting a lot here. He’s exactly the fucking same: he claimed only he and his non-profit could “safeguard” AGI, and now he’s off to work for a private company, because hot damn, he never actually gave a shit about safeguarding AGI to begin with. He’s a fucking shit-slinging hypocrite of the highest order.
- Last, but certainly not least: Annie Altman, Sam Altman’s younger, lesser-known sister, has long maintained that she was sexually abused by her brother. All of these rich people are Jeffrey Epstein levels of fucked up, which is probably part of why the Epstein investigation got shoved under the rug. You’d think a company like Microsoft would already know this, or vet it. They do know, they don’t care, and they’ll only give a shit if the news ends up making a stink about it. That’s how corporations work.
So do other Lemmings agree, or have other thoughts on this?
And one final point for the right-wing cranks: Not being able to make an LLM say fucked up racist things isn’t the kind of safeguarding they were ever talking about with AGI, so please stop conflating “safeguarding AGI” with “preventing abusive racist assholes from abusing our service.” They aren’t safeguarding AGI when they prevent you from making GPT-4 spit out racial slurs or other horrible nonsense. They’re safeguarding their service from loser ass chucklefucks like you.
Money wins, every time.
And right there, you answered your own (presumably rhetorical) question.
The money people jumped on AI as soon as they scented the chance of profit, and that’s it. ALL other considerations are now secondary to a handful of psychopaths making as much money as possible.
This was always coming, and we’re going to do fuck all about it. But on the upside, the future is going to be absolutely rad for the .001%
“A handful of psychopaths making as much money as possible”
Capitalism in a nutshell
Unrelated but is your name a reference to Amy Likes Spiders? That was my favorite poem in DDLC.
Probably subconsciously. I came up with the name long after playing the game, but I wasn’t thinking of it when I made it. I actually am just a lady who likes spiders
I love spiders, and lots of bugs really. I have zero respect for people who look down on them when they’re just so damn cute.
Like how can anyone look at this and say anything other than “awww”
awww
Yeah you’re right. Look at that little cutie <3
I use the way people treat other animals, especially ones like bugs and stuff, the ones we barely give a second thought about, as a measure of character. Phobias are one thing, but at least have compassion for this other living thing
Very few will get a chance to feel what it’s like to pet a bug and have it go from fearing for its life to trusting you with its life. They genuinely have no framework for a world that treats them as disposable when you show them compassion, and it’s magical how they react.
Corporations gonna profiteer. Capitalists gonna exploit. “Visionary business leaders” gonna turn out to be dirt bags when you dig into them (Google Annie Altman).
And “we” keep falling for it and putting up with it en masse, unto our collective doom.
(Google Annie Altman).
Cue the “I didn’t even read your fucking post” guy.
I only had a whole ass paragraph dedicated to her.
Hey I am not an AI , I have real feelings, and you hurt them by calling me a looser ass chucklefucks!
looser ass
You might want to go see a doctor about them loose stools!
How could an intelligence that requires massive amounts of CPU, RAM, and database storage even conceivably
What you define as “massive” might still be a large amount for most consumers. But even then, it’s not… really. Developers frequently run these models on their own laptops. Some ML models fit on an iPhone or Android phone and can generate tens, or even hundreds, of words (tokens) per second.
So the fact that they don’t need massive amounts of CPU, RAM, and database storage is rather the point. Imagine if it could escape and multiply. It could conceivably do so quite quickly given current technology.
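As a rough sanity check on the “fits on a laptop” claim, here’s the back-of-envelope memory math for the weights of a 7B-parameter model. This is a sketch that ignores activation memory and runtime overhead, and the quantization widths are just common examples:

```python
def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of an LLM's weights in decimal gigabytes."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 7B model at full 16-bit precision vs. a common 4-bit quantization.
fp16 = model_size_gb(7, 16)  # 14.0 GB -- needs a hefty GPU or lots of RAM
q4 = model_size_gb(7, 4)     # 3.5 GB -- fits in a mid-range laptop's or phone's RAM
```

Which is why quantized 7B models really do run on consumer hardware, even if the big proprietary models are far larger.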
Zephyr 7B might run on a cell phone, but you don’t understand how far behind OpenAI these models are; their GPT uses multi-agent networks too, and it certainly requires massive, massive amounts of power. And no, a tiny model on a phone can’t brrr out hundreds of words per second. You are just misinformed somehow. If I tune my computer correctly, I get like 30. And these models are orders of magnitude behind in quality anyway. How you believe they can replicate is beyond me. Using AutoGen? I mean, we can already make self-replicating software, called viruses, but what’s the gain of having a language model as the payload?
Give it time. Cell phones are getting more powerful every day.
As for misinformed… sure, it’s possible. But I doubt it. Llama isn’t ChatGPT, but it runs pretty well on my machine. Is it perfect? No, of course not. Neither is ChatGPT. But it’s “good enough” for what I need it for, and it certainly could be “good enough” for many other users.
What’s the gain of an LLM in a virus? Well, that… is a little more esoteric. It’s about as esoteric as encrypting hard drives, and crypto malware isn’t always a virus either. Imagine an LLM in a virus used to determine whether a given file’s content is worth extracting from the device. I haven’t figured out all the side ventures yet, but I can see a use for it.
I don’t get it: you didn’t say “in the future,” you said it is that now; it’s the premise of the entire comment. We aren’t in the future. It’s not used in mobile apps that much yet because it’s not at all reliable, or fast, or cheap. It’s incredible technology, but it’s not ready for the things you described.
insert spacesuit “always has been” meme here.
Once they saw the big stack of money, they suddenly forgot that OpenAI’s charter specifically mentioned preventing AI from benefiting a select few, and instead handed everything over to Microsoft on a silver platter.
If they wanted to safeguard AI, they would actually make the models public. Bad actors are bound to get them anyway; hiding them behind secrecy is very unlikely to work. And I mean, AI could make a virus infecting most infrastructure on the planet (Amazon and Google data centres) and then shut it down or use it for its own purposes. As several programming memes lay out, the entire modern web infrastructure is surprisingly dependent on just a few APIs and tools.
AI could make a virus infecting most infrastructure on the planet (Amazon and Google data centres)
The most important infrastructure on the planet is air-gapped, meaning it’s not connected to the internet, for good reason. Reasons like this. The thing is, as it stands, a determined human could do this as well with Google and Amazon. Sorry, having a chuckle over here that you’re conflating two cloud hosts with “all the infrastructure on the planet,” like irrigation canals out in the boonies are somehow internet-connected.
the entire modern web infrastructure is surprisingly dependent on just a few APIs and tools
That doesn’t mean you can deploy a payload to every device on the planet in a reasonable amount of time. Dude, half the people in third-world countries aren’t even connected, and if they are, they’re dealing with like 2G speeds on cell service, and they definitely don’t own a computer, only a phone. There are all kinds of speed limitations in real-world hardware. Just because you might have a fast connection and a fast PC doesn’t mean everyone does, and those physical limitations make a rogue AGI “destroying infrastructure” a bit of a laugh.
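The bandwidth point can be put in rough numbers. Assuming, purely for illustration, a ~4 GB quantized model as the payload, a 2G-class link of about 0.05 Mbit/s, and a 100 Mbit/s broadband line:

```python
def transfer_days(size_gb: float, mbit_per_s: float) -> float:
    """Days needed to move size_gb gigabytes over a link of mbit_per_s megabits/s."""
    seconds = size_gb * 8e9 / (mbit_per_s * 1e6)
    return seconds / 86400

slow = transfer_days(4, 0.05)  # roughly a week over a 2G-class link
fast = transfer_days(4, 100)   # a few minutes over fast broadband
```

So even ignoring everything else, “multiplying” a multi-gigabyte model across slow, patchy networks is measured in days per hop, not seconds.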
It doesn’t matter if anyone cares about the safety of AGI.
AGI is a direct source of power, much like any weapon. As soon as AGI exists, we will be in a state of warfare, because the “big guns” will be out.
I know I’m having trouble articulating this point, but it’s very important to understand. AGI is like a nuclear weapon: once a person has it, it doesn’t matter how much others may want to regulate them. It’s just not possible to regulate.
The ONLY strategy that gives us hope of surviving AGI’s emergence without being enslaved is to spread AGI far and wide to ensure a multipolar AGI ecosystem, which will force AGI to learn prosocial interaction as a means of ensuring its own survival.
And if you want to come at me with “AGI doesn’t inherently have a self interest”, consider that the same is true of nuclear weapons. And yet nuclear weapons get their interests from their wielders. And the only way to stay safe from nuclear weapons is also to proliferate them far and wide so that there is a multipolar ecosystem of nuclear weapons, ensuring those holding nuclear weapons have to play nice to ensure their own survival.
All of this talk about restricting AGI will only have the effect of concentrating it in a few hands, leading to the very nightmare the regulators are trying to avoid.
If the regulators had succeeded, and the US had been the only nation to possess nuclear weapons in the long run, humanity would have suffered massively from that lack of parity. Let me be less coy: humanity would have suffered under the brutality of repeated nuclear holocausts as the interests of the few led to further and further justification of larger and larger strikes.
Nuclear weapons cannot be regulated by law. They can only be regulated by other nuclear weapons. Same is true of AGI.
Okay. It’s not only a weapon though.
It doesn’t need to be only a weapon for any of this to apply. Same as nuclear fission.
“It doesn’t matter if anyone cares about the safety of agi”
It does matter. And it doesn’t apply, because it’s not just a weapon. It matters how it acts toward humans ethically, in so many ways other than indiscriminate slaughter.
I think it will be fine as long as we don’t give the AI thumbs.
removed by mod
How do you know?
removed by mod
Are you able to articulate at least one specific reason that we are nowhere close to developing AGI?
Without any specific reason being stated, I’m tempted to believe you are just confidently declaring this to protect yourself from fear.
I agree we’re far out, but not as far as you think. Advancements are insane, and AGI could be here in 5-10 years. The way the industry has been attempting it for the past decade is wrong, though; training should be more in-depth than images/videos. I think a few are starting to understand how to do more in-depth training, so even more progress will start soon.
I think you are being optimistic.
If you are old enough to remember AIM chatbots, this current generation is maybe multiple times more advanced, not exponentially so. From what I have seen, all the incredible advancements have been in image production.
This leads me to believe that AGI has never been the true commercial goal, but rather an advancement of propaganda media and its creation.
This leads me to believe that AGI has never been the true commercial goal, but rather an advancement of propaganda media and its creation.
Uh, what? Why wouldn’t it just be because text/image generation isn’t even on the same plane of difficulty as AGI?
I think 5-10 years is optimistic, given how much hand-tuning and manual training has to take place. Given how insanely long it’s taken to get where we are, how many times I’ve heard machine intelligence oversold, and what LLMs can actually do, I think we are still many decades out.
That said, what ML and AI can do is still game changing and will still have an impact even if it isn’t some kind of scary skynet AGI thing.
We’ve been promised self driving cars for over 10 years and still aren’t close, I think we’re a long ways away from AGI.
Self-driving cars are an area in which straight AI would probably work very well. I don’t think we need a full-on intelligence to drive around.
Anyway, we already do have self-driving cars; they’re just not very mainstream yet, mostly because they’re prohibitively expensive and no one quite trusts them. But that’s more because there are other idiot humans around than anything else.
Yeah if we were at the point with AGI that we’re at with self driving cars, then AGI would be fully implemented, just still with some safety issues and only in the hands of a few corporations.
That’s hardly “sci fi”. That’s currently existing behind closed doors.
To be fair, that promise came from someone who is clearly a con man and a swindler. If you ever took that promise seriously… I’m sorry.
No I absolutely agree with you, I’ve been skeptical of all the self driving news for years. However, I was using it as a parallel to other AI based discussions. While Elon may have been over hyping what was going to be possible in the near future, there is no evidence that other people aren’t doing the same now.
Just like with autonomous vehicles, we’ve made impressive leaps in what ML can do, but I think there is still a long road ahead.
Entirely agreed, we have such a long path ahead.
Elon/Tesla is far from the only outfit working on self-driving. GM’s Cruise is the one that recently dragged a person under the car for dozens of feet.
For sure, but the traditional motor-vehicle companies that were dragged kicking and screaming into the EV game were not making the same predictions about how quickly we would get to self-driving. That was pretty much all Elon Musk setting the absurd timelines, plus a handful of tech companies also pursuing driverless tech. I would say the “serious” car companies never promised that, but maybe I’m wrong and just never saw it.
Imagine if we had FTL, that would be so cool.
Agree. Ever since they started lobbying politicians, it’s been clear that “safety” is just a pretext for regulatory capture.
AI isn’t the danger, it’s human application of “AI” that will be horrible as fuck.
It often seems to be… ‘Gee Brian, that’s a great invention! I wonder how we can kill people with it’, the thought having germinated in a slurry of greed and self-interest. (Apologies for the slightly jaundiced view of our betters and elders; it comes with age.)
AGI isn’t real
You mean that physical objects cannot display human level intelligence? That’s obviously untrue, I have about seven billion counterexamples to show you.
What? No. That’s clearly not what they meant. What???
Those are naturally conceived with some good ol’ fucking (or in vitro fertilization), not artificially created with thousands of GPUs.
Unless you can actually point to some special magic consciousness dust in human brains, that doesn’t really make any difference.
Why does consciousness have to be organic-based? After all, there’s plenty of life on this planet that’s organic and has no consciousness, so why can’t the inverse be true?
Think of how stupid the average person is, and realize half of them are stupider than that.
-George Carlin
Maybe they’re just bad at math and don’t understand how averages work.
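Pedantry aside, the “half of them” line technically describes the median, not the mean; the two only coincide when the distribution is symmetric. A quick sketch with made-up numbers:

```python
import statistics

# For a skewed distribution, "half below" is the median, not the mean.
scores = [60, 70, 80, 90, 200]  # one outlier drags the mean up
mean = statistics.mean(scores)      # 100
median = statistics.median(scores)  # 80

below_mean = sum(s < mean for s in scores)      # 4 of 5 are below the mean
below_median = sum(s < median for s in scores)  # 2 of 5 are strictly below the median
```

So with a skewed-enough distribution, far more (or fewer) than half can sit below the average.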
Ewwww. Super icky comment.
Lets be thankful we have commerce, buy more, buy more now and be happy… - Om