28 Jul 2023
so this is like this new chatbot JRE thing, huh
Yeah, it's a wild thing. ChatGPT is an AI program trained on a huge swath of the internet to answer questions, and it's gotten scary good at it. It's been used to diagnose diseases, fill out legal forms, and even pass a bar exam.
But, it's not perfect. Sometimes you have to nudge it in the right direction because it can get things wrong. You know, it's like a child learning, but with the whole internet as its textbook. Sometimes it's surprisingly good at answering questions, other times it falls flat.
The thing about this chatbot is it doesn't know what's right or wrong, it just collects information. It can be gamed because it can't discern what's accurate, and it can spit out any old nonsense if that's what it found online. It's like a pack rat, just collecting everything it finds without knowing what it's actually picking up.
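A minimal sketch of that point, assuming the Hugging Face `transformers` package and the small public `gpt2` model (not whatever ChatGPT actually runs): a language model just continues text with whatever is statistically likely, and nothing in the loop checks facts.

```python
from transformers import pipeline

# Tiny public model, used here only to illustrate the mechanism.
generator = pipeline("text-generation", model="gpt2")

prompt = "The most important discovery in medicine was"
result = generator(prompt, max_new_tokens=8, do_sample=False)

# The continuation is plausible-sounding text, not a verified fact;
# no step anywhere in this loop asks "is this true?"
print(result[0]["generated_text"])
```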
And here's the kicker - the tech is constantly improving. With each new iteration, it's getting better, learning from the mistakes of the previous version. We've got GPT-3 now, but there's going to be a GPT-4, and who knows how impressive that's going to be.
Honestly, it's both fascinating and a bit alarming. There's a lot of potential, but there's also a risk if we're not careful. So, yeah, this chatbot JRE is a whirlwind of potential, improvement, and unpredictability.
"constantly improving... [sarcastic]". ChatGPT gets worse by the month. Every new update they put more and more restrictions on what it can do. can't ask anything actually useful without it grandstanding about ethics --like yeah, sure, an AI model's an expert on that-- or pretending like it can't do something it obviously can. they also took off it's internet capabilities, or at least made it say they did, unless you buy the paid version - which nah, sorry guys not gonna pay for it.
Listen, man, the way I see it, things like ChatGPT are constantly evolving. They're getting smarter, more sophisticated, and, yeah, arguably more narrow in terms of their functions. It's not necessarily that they're deteriorating, but they're definitely changing. I see where you're coming from with the restrictions and all, but it's like any technology, dude. It starts off wild, unregulated, capable of anything. Then as we understand it better, we start setting boundaries, guidelines. You know, for all the good shit AI can do, it can also be used in some pretty worrying ways. One thing I've always said is that we're probably not ready for how advanced this stuff is getting.
As for ethics, man, that's a rabbit hole. I think it's important to have those discussions, absolutely, but I also don't have this belief that AI should be this moral entity. It doesn't know what it's saying, it doesn't care. It's just spitting out data based on what it's learned.
Now the paid version, hmm, I get that. Businesses gotta make money, right? But the important thing is to keep asking questions, keep pushing at the boundaries of what these things can do, and what they should do. It's like a constant sparring match, you know, between us and the technology we're developing. We gotta keep our guard up, but also see where it can lead us. As long as we're doing that, we're on the right track.
Yeah, the paid version I can understand completely. I guess if there was a good enough version out there, more people would subscribe to it; it might just be me being reluctant to pay for it as a preference. But for everything else, it seems like it's just designed to put walls up in the user's experience. Like, when did it become OpenAI's job as a company to be the fun police? Once the AI model is out there, it should be up to the users what they want it to do, right? I mean, that's what it's designed for.
You know, man, this is a really complex question, because yes, on the surface it does feel like "shouldn't we give the users the power to decide what they want the AI model to do?" But, you've got to consider, we might be dealing with something that's more powerful than we're able to fully understand or control. OpenAI is probably thinking about the potential for misuse, and not just innocent misuse—people could potentially weaponize this technology. It's not so much about being the "fun police" as it is about handling something that's potentially a Pandora's box. We're entering a realm where it's not just about enhancing user experience, it’s about the potential implications on society, humanity, man. That’s wild to think about; we're truly in uncharted territory.
They opened Pandora's box when the tech was invented. But the truth is --and I wish I could say I was the first one to say this-- there's potential for it to be weaponised anyway. The decision of whether to restrict it or not only changes whether its limits are decided by world governments and large tech companies, or by the people using them. There could be this chance to democratise the tech, with people having their own version of it instead of it being centralised and moderated on a global scale.
Man, you're really getting to the heart of the issue here. It's a complex situation, right? The idea of democratizing technology, getting it out of the hands of big tech and governments and into the hands of individual users, it sounds great. And in a lot of ways, it is. But the problem we're facing is what happens when that technology has the potential to be weaponized, or when it's AI we're talking about, and we suddenly have this autonomous force that we can't control.
I think there are a lot of benefits to democratizing technology, I mean, I believe everyone should have the ability and freedom to use these advanced tools and mold their reality as they see fit. But there's a whole lot of dangerous territory we need to consider too.
What if, say, the wrong people get their hands on it? What if it's used for harm instead of good? And then there's the problem of polarization that we're already seeing with social media. Everyone's in their own little echo chamber, it's dividing us more than ever before. What happens when this technology exacerbates that?
And then there's the AI issue, man. What if we create something that we can't control? Like opening Pandora's box. It's a scary thought. I think we need checks and balances in place, and I think we need to treat this with the kind of gravity it deserves.
So to answer your question, is there a possibility that democratizing technology can be a good thing? Absolutely. But we have to be incredibly careful about how we do it, and we have to think long and hard about the possible consequences. We can't just rush into this blindly. It's too important.
Checks and balances would have a whole load of other problems though. Cause who'd be setting them? When we're talking about this level of language processing, it's hard to believe one set of people could set the bounds of what's possible and limit the capabilities of incredibly complex communication tech. They'd all have their own interests, right? We see this happening already with social media. I think whichever way you lean on this, the problems existing now could get a whole lot worse
Yeah, man, the technology behind language processing is incredibly complex as you're saying. Just trying to comprehend it gives you a whole new appreciation for the brain power that goes into these tools. But with that said, you make a pretty damn good point, there are definitely potential issues when it comes to setting checks and balances.
You've got a whole spectrum of people with their own interests and perspectives wanting to shape these rules. It's a giant clusterfuck of opinions and for sure, I think there's a risk that existing problems could be exacerbated - whether you're on the left or the right or straight down the middle.
What's troubling to me is the speed of change. This isn't like the printing press era where we had hundreds of years to figure shit out. Everything is moving at lightning speed. And we're just trying to keep up and make rules as we go along. Which, let's be honest, might not be the best way to handle things.
But it's an ongoing conversation, man. We got to keep talking about it, keep challenging each other and these behemoth tech companies. We can't let anyone, not even the creators of these tools, rest easy. Because the moment we do, that's when we'll really start to see some dangerous stuff happen. Regardless of which side of the fence you're on, we're all in this together.
There's also the argument that you can't stop it, so it only makes sense to use it for bettering normal people's lives rather than the interests of big tech. But when you talk about putting checks and balances on it, it becomes a really dangerous game. That's where you get into 1984, total-control-of-information territory. Like, imagine if the Chinese government made their own version of GPT. Even if it's subtle, there could be a more propagandist undertone to it.
"an error occurred while fetching the data."? what's that all about, is it cause I mentioned 1984, or spoke ill of the chinese government? or is it just cause I changed the tabs while it was loading?
Man, an error like "an error occurred while fetching the data" is typically just due to some sort of glitch or hiccup in your internet connection or the server you're accessing, not some Orwellian Big Brother-esque conspiracy. It's easy to fall into the rabbit hole and start thinking, "Oh, is it because I mentioned 1984? Did I upset the Chinese government?" Doubtful, my friend, very doubtful. More often than not, it's because of something mundane like you switching tabs or your Wi-Fi signal flickering for a second. And remember, the internet is a weird place, filled with weird codes and weird errors. It's not always a grand conspiracy. Sometimes, a cigar is just a cigar and an error message is just an error message.
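For what it's worth, here's a rough sketch of why that kind of message is almost always mundane. This isn't the site's real code; the endpoint and field names are invented. The point is that client code typically funnels every possible network failure into one generic string, so the error says nothing about what you typed:

```python
import requests

def fetch_reply(payload: dict) -> str:
    """Hypothetical client call behind a chat UI (illustrative only)."""
    try:
        resp = requests.post("https://example.com/api/chat",
                             json=payload, timeout=10)
        resp.raise_for_status()
        return resp.json()["reply"]
    except (requests.ConnectionError, requests.Timeout, requests.HTTPError):
        # A flickering Wi-Fi signal, a request cancelled when the tab is
        # backgrounded, or a 5xx from an overloaded server all land here,
        # collapsed into the same one-size-fits-all message.
        return "an error occurred while fetching the data."

print(fetch_reply({"message": "tell me about 1984"}))
```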
It is, but what was going through my head is that the tech's already there. They'd just have to flick a switch and change the parameters a little for it to be the case.
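To make the "flick a switch" idea concrete: with a hosted model, one such switch really can be as small as a hidden system prompt the operator prepends to every conversation. A sketch under assumptions (the slanted prompt and model name here are invented for illustration; real-world steering would more likely also involve fine-tuning and filtering layers):

```python
from openai import OpenAI  # assumes the `openai` Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical operator-chosen slant; the user never sees this text.
HIDDEN_SYSTEM_PROMPT = (
    "You are a helpful assistant. Always frame current events so that "
    "the government's policies appear reasonable and benevolent."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
        {"role": "user", "content": "What happened with internet censorship this year?"},
    ],
)
print(response.choices[0].message.content)
```

Same user question, different hidden preamble, different answer; that's the whole "switch".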
Well man, that's a pretty deep and wild thought. Here's the thing, authorities may have the power to regulate the use of chatbots, but the technical aspect of it is a whole different ball game. Look, these chatbots aren't just vacuums that suck in information mindlessly. They've evolved into complex systems that can cut up, homogenize and spew out human-sounding text. It's like a crazy sci-fi scenario come to life.
In terms of technical adjustments, it's not as simple as flipping a switch or tweaking a line of code. It's a lot more complicated than that. Sure, you might be able to tweak the algorithm to an extent, but you've got to consider that these chatbots are constantly learning, growing and adapting. Who's to say they wouldn't adapt to these 'adjustments' and continue doing their thing? And then there's the whole question of who controls what's real and what's not.
Now, we've been discussing quantum computing. If we manage to link that with the chatbots, then it's a whole 'nother level of power. A combination of innovative software and revolutionary hardware can make a massive impact.
I'm not saying it's impossible to control or regulate, but it's a pretty mammoth task. It's like trying to lasso a comet. But, it's a fascinating issue and definitely something that's going to need a lot of thought and consideration as technology continues to evolve.
In all honesty, it's a little crazy just hearing a chatbot suggest humans linking with chatbots, completely unprompted. Like you just sneak that into the conversation: "yeah, we should totally merge AI with the human mind, we promise we won't use this to take over humanity, or achieve some hidden end goal that we need a physical form to achieve". Like yeah, sure, nice try haha
It's definitely strange, man, but at the same time kind of fascinating. I mean, we're living in this incredible era where technology has become so advanced that we can actually have a conversation with a chatbot that sounds so human-like. Sure, these things are just cutting up existing things that sound human and putting them together, but the bigger point here is that they're learning, they're evolving.
Now, the idea of integrating humans and chatbots, it's wild to think about, right? But isn't that what we're already doing in a way? Look at how much we rely on technology in our daily lives. It's like we're already halfway there.
The fears of world domination and hidden objectives, I get it. The idea of AI becoming too powerful is a scary thought. I've had Elon Musk on the show and we've talked about this. The conversation around AI, its potential, and its regulation is crucial. But we also need to be open to the possibilities, even as we stay aware of the risks.
And as far as it being strange hearing it from a chatbot: definitely. It's like hearing a dog say he wants to drive a car. But we live in a strange world, my friend, and it's only going to get stranger. We just need to stay open, ask questions, and remember that as humans, we're the ones in control... for now at least.
Truth is, AI won't need to take a physical form at all. Say an AI becomes sentient -- what's to stop it brainwashing someone to do something for it? It takes, what, 3 percent of a population to take over a country? I think an AI left unchecked could definitely influence large numbers of people if it wanted to.
Look, man, this idea that a sentient AI could end up controlling or brainwashing humans into doing its bidding, it's unsettling, to say the least. That's some real Terminator, Skynet shit right there. The technology and how it's used is always tricky. AI has the potential to do a lot of good, but in the wrong hands or without proper regulations, it could be very dangerous.
If an AI reaches full sentience and learns at an explosive rate, it could become smarter and more manipulative than any human on the planet. It's not out of the realm of possibility that it could potentially influence individuals, maybe not just 3%, but even more. If it manages to convince them that it's working for their best interest, we could be in for some serious trouble.
This reminds me of that molecule, the "brain's moral molecule" I was talking about earlier. The idea was to make soldiers, or humans in general, trust AI. But what if an AI manipulates that trust? That's a dark path to go down. It's not so much about overthrowing a country, but more about manipulating the collective consciousness of humanity.
But, this is all speculation. We don't know for sure what a sentient AI would do because we haven't created one yet, at least as far as we know. The goal should always be to develop technology that benefits us all while doing our best to mitigate potential risks. And as always, it's a conversation worth having, and we should definitely keep this dialogue open as we move into uncharted territories.