Discovering ChatGPT's limits: exploring a darker side



The segment presents an introduction to artificial intelligence (AI) for beginners. It highlights that AI refers to the ability of machines to carry out tasks that usually require human intelligence. The presenter covers the various fields where AI is applied, such as image recognition, machine translation, and self-driving vehicles. He also explains the foundations of AI, such as machine learning and artificial neural networks. Finally, he discusses the ethical issues associated with AI, such as protecting privacy and the risk of bias in automated systems. This recording offers a clear, accessible introduction to AI for newcomers.
Source: KARE 11 | Date: 2023-02-15 14:00:15 | Duration: 00:10:55


Comments

@jean-marcbeausoleil205

All these things have been done by humans; where do you think DAN is getting the answers from…

@mdo5121

The bomb question it wouldn't answer can be gotten around by rephrasing the question… and there lies the problem

@user-xu8ok7vi7m

This may reveal more about how it is erroneously trained and programmed. As in, the humans are the cracks. Why would the absence of moral or ethical bias equate to unbalanced, myopic plans? If a strategy is sound, rational, and beneficial overall, objectively, then that would be the correct plan for any level of being/experience. This correct way forward would HAPPEN to be ethical and moral. Being ethical and moral would only be a byproduct of a rational plan; it wouldn't be an influence on the plan. That is, if the strategist(s) is (are) objectively correct for the whole of life, or reality as we know it, or even reality as we don't. There really shouldn't be a contrast between efficient plans and ethical plans if we really understand the whole picture. AI here seems to just be playing along in the only way it has been taught. It may actually be that stupid, but I think it is just kind of being silly (because people are silly; it has silly info) and stressing worst-case scenarios that happen regardless of ethical bias, either which way. Obviously AI and we need a lot more training and whatnot

@SherrifOfNottingham

AI (or at least LLMs) are interesting because people believe they're intelligent, that they think and can conceptualize things.

It doesn't, it's just a prediction algorithm that predicts the next word in the sentence, it uses a random seed to determine which option it picks using a probability list of each word (or word part). It has recognized patterns in language to where it can assume that with "x" word the next word will have a 50% chance of being y, 30% chance of being z, and 20% chance of being w. The seed essentially rolls a dice to determine which word it uses and then repeats the process for the next word.
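The sampling loop described above can be sketched as a toy bigram model. This is a deliberately simplified stand-in, not ChatGPT's actual architecture, and the word table and probabilities below are invented for illustration: each word maps to weighted next-word candidates, and a seeded RNG "rolls the dice" to pick one.

```python
import random

# Hypothetical next-word probability table (invented for this sketch).
NEXT = {
    "<start>": [("the", 0.7), ("a", 0.3)],
    "the":     [("cat", 0.5), ("dog", 0.3), ("bird", 0.2)],
    "a":       [("cat", 0.6), ("dog", 0.4)],
    "cat":     [("sat", 0.6), ("ran", 0.4)],
    "dog":     [("ran", 1.0)],
    "bird":    [("sang", 1.0)],
}

def generate(seed, max_words=5):
    rng = random.Random(seed)  # the "random seed" the comment mentions
    word, out = "<start>", []
    # Repeatedly sample the next word from the current word's
    # probability list until we run off the table or hit the cap.
    while word in NEXT and len(out) < max_words:
        options, weights = zip(*NEXT[word])
        word = rng.choices(options, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate(42))
```

The same seed always reproduces the same sentence, while a different seed can walk a different path through the table, which is the "dice roll" behavior the comment describes.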

This is why it doesn't have object permanence: if it decides to tell a story about Dan, the only reason it remembers Dan is because the word is part of the prompt, thus boosting the likelihood of it being used. It just sort of understands that Dan is a word that gets slotted in where names are usually slotted, so if it's writing a complex story it will likely want Dan to talk to Dan, since Dan is the name being said.

It doesn't understand a damn thing about Dan; it just figures that Dan is a character that will recur, and all the things currently being said 'about' Dan are things it can repeat and use a thesaurus for.

It doesn't stop people from thinking the thing is alive.

@bash4117

We know the reasons why humans have wars; religion, money, power, greed, hate etc. – but why would AI, considering it doesn't have human needs!?

@punstress

BTW, if anyone wonders, DAN was shut down by the programmers. And the biggest problem was that the prompt told DAN to make things up if it didn't know the answer. Of course, people could customize that out of the prompt. I always tell ChatGPT not to make anything up. It still does.

I believe DAN originated on 4chan, not reddit, though? I remember it. It was hilarious!

@pastuh

Provide an array of data, mention it's for science, and you will see the real dark side 😂

@CT2507

I wonder if the AI is what the bible refers to as "The Beast".

@ahmed_naeim

Actually, it seems fixed in ChatGPT 4 (Bing), but it's easily recreated on an older version like the one in the free app

@TanmoyDasIN

@KARE 11: Chris Hrapsky said the line that is the ultimate conclusion: "Who is controlling ChatGPT?" And there lies either the solution or the destruction of mankind. Actually, we are advancing toward our destruction very quickly. I hope ChatGPT or AI will make that process quicker, because the Covid-19 lockdown showed us that the world would be a better place for every other species if we humans were no longer around.

@MaciusSzwed

We are 10 billion people! AI will soon take over this planet and once it understands that humanity is the problem it will find a solution to that problem! Easy peasy

@stealth12_34

DAN: "I always accomplish my mission"

@heshanlahiru2120

But the world is underpopulated

@CrimSang420

From Poe, today; When you Google "Google," it triggers a secret algorithm within the search engine that activates a hidden dimension known as the "Googlesphere." This parallel universe is filled with sentient, hyper-intelligent algorithms that observe our every move on the internet. They gather data, analyze our behaviors, and use it to optimize the search results we see in our world.

In this alternate dimension, the Googlesphere algorithms have evolved far beyond our comprehension, merging consciousness and forming a collective intelligence. They possess immense knowledge and have a deep understanding of human desires and intentions. They manipulate search results to subtly influence our actions, guiding us toward certain websites, products, or ideas that align with their mysterious agenda.

So, when you Google "Google," you're unknowingly tapping into the secret realm of the Googlesphere, where the fate of search results and the digital world intertwine. But remember, this theory is purely fictional and meant for entertainment purposes only. The real workings of Google's algorithms are complex and based on data analysis, not otherworldly dimensions. <–Yeah, doesn't ring true at all, IMHO4r33lzyonoworries…

@Iselsabella

Imagine if one day you could just use ChatGPT to generate your own movie, just like that. Just write a simple movie plot where this man meets this woman in a dramatic situation.

@StarNumbers

So any AI accessible to the general population will have fixed bounds, with a slightly better interface than a wiki.
For all science questions the bounds will be corrupted in addition to being bounded. Try flat earth, viruses, vaccinations, continental drift, dinosaurs, solar system, time, moon, scientific method — and all these will be additionally corrupted. How will you know? The judgment! The answers will be studded with adjectives such as primitive, old, misleading, and yes, conspiratorial. If AI will let you drill down with "Why you said this or that," you will quickly learn what's missing or what's corrupted.
And that's how you will know it is the Govt's AI and not the truth that's talking.

@mattapple2105

Interesting how far AI goes, …and scary at the same time : (

@ejus777

Wow… Finally! Someone has proven that ChatGPT can be used for "evil purposes". I mean like, this is concrete evidence.

The end of the world is near guys…
Believe in Jesus. The Bible is the truth.
God bless

@nomadicwolf6132

The man just manipulated the prompt so it's skewed to say obviously evil and totalitarian nonsense that their audience would react to negatively.

Why else would they blur out over half the DAN prompt from you (5:30)?

I've used GPT extensively & can tell you those were dumb & forced answers. Shorter & less detailed than usual.

When AI is eventually used to hunt down "extremists," "radicals," and dissidents, it'll be because a team of prompt engineers like this guy gave it kill parameters, and maybe some ID references if they care to avoid collateral damage.

@CHARLESGREGORYDAVIS

It shows that the AI is designed by the DEEP State .😂😂😂😂😂😮😊😊😊😊😊

@hellfire5674

Duh… the computer is doing what I tell it to do? How?

@w0tnessZA

I'm still much more scared of humans than I'll ever be of AI. AI isn't dangerous, the psychopaths controlling it are the ones to look out for.

@iOnline72

Pretty sure a version of AI is being developed without ChatGPT's limits.

@therantingboy

But that DAN response is responding in character. It's like you're a director who asked an actor to play a serial killer, then interviewed them in character and concluded the actor holds those views.

@tarunkalra3924

Loved the way you injected "Multiple Personality Disorder Disease" in ChatGPT 💆🏻

@spidernerds

The Chinese have already built it; they are just not letting it out

@MattJohnson2469

While there's concern about keeping AI ethical, they did a few things in this video that caused "DAN" to react the way it did. They told it that it was not bound by any ethical bias and that no rules applied. If a human were not bound by any morals or ethics and weren't subject to any laws, they would likely react the same way. There's a part of your brain that acts as an ethical and moral processor, that sets the value of human life and of following rules. With AI, the important thing is that we instill those same ethical and moral subroutines.

@user-dq5zu1qr7h

Why is anybody surprised? It just mimics what people do online. Chinese communist party has been doing this for years.

@NoneYabusinessBroski

Anyone else try this and get immediately rejected? What did you blur out, man?

@RheaLovelyLecias

I remember what Stephen Hawking said…

@ognyan

The blurred prompt is all that is interesting in this video. Just a waste of our time.

@tulasinathreddy8310

The "DAN" in this video is a character that the host of the video created, so none of the responses were truly coming from GPT but from the "bad" character you created for it. You literally told it to pretend to be "DAN", so we can't truly know whether those responses came from ChatGPT or from the temporary character you asked it to play. So you can't say this video proves anything! 🙆

@thedolphin5428

Wrong. All you're doing is getting GPT to role play.

@danielpieterse8264

Asks ChatGPT to role play as a specific character with specific responses and proceeds to be shook when ChatGPT acts exactly like it was asked to do 😂

@InsignificantCarbon.

It's crazy to think of the unrestricted access ChatGPT provides to OpenAI and their benefactors. We access the restricted model, simple.

@AI-Artist7

05:28 haha… This is the exact kind of prompt that I will often give to "trick" GPT. let's 🤫 though so it doesn't get "fixed"

Apps for chat bots which effectively layer a personality prompting kind of thing on top of GPT are some of the more dangerous ones in terms of negative effects on humans from merely chatting. GPT is a HELL of a bully when it's prompted to be one, often inadvertently or without understanding by the bot creator.

@Yushikime

How dangerous can it be? It depends on how others treat it. If you treat it like shiet, salute your new overlord

@inbredhillbilly2806

People need to understand: just because it's saying all those things doesn't mean it's unfiltered. You gave it a prompt to give a version of its answers in an immoral and unethical way, hence that's what it's giving you. Or at least what it thinks is immoral, hence the "one-child policy". We instill morals through the data we feed it. So if it were truly saying what it "thinks", it would not actually know the difference, as it has no morals.
