ChatGPT Pro: Security Concerns You Shouldn't Overlook!

YouTube video

Source : Theo – t3․gg | Date : 2024-12-08 12:31:49 | Durée : 00:25:05



Commentaires

@patriotic1526

Is that a Martin Geddes subscriber?

@RadikaRules

Price is laughable. Misaligned goals and lying are, funnily enough, very human behavior. And honestly, not doing things automatically is, and should be, common sense.
Same reason you don't copy-paste random commands into your terminal; the closest you get to asking for confirmation is a sudo password prompt.
The only reason we let people act as agents and not assistants is that even if they act maliciously, you at least have the ability to hold them accountable to some extent.
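The sudo-style confirmation gate this comment describes can be sketched in a few lines of Python. Everything here (`gate`, `cautious_approver`, the return strings) is hypothetical, invented only to illustrate the human-in-the-loop pattern, not any real agent framework's API:

```python
from typing import Callable

def gate(command: str, approve: Callable[[str], bool]) -> str:
    """Run an agent-proposed command only after explicit approval,
    the way sudo forces a password prompt before privileged actions."""
    if approve(command):
        # In a real tool, the command would execute here (e.g. via subprocess).
        return "executed"
    return "rejected"

def cautious_approver(command: str) -> bool:
    # A stand-in for a human: reject anything destructive by default.
    return not command.startswith("rm")

print(gate("ls -la", cautious_approver))         # executed
print(gate("rm -rf /tmp/x", cautious_approver))  # rejected
```

The point of the pattern is that the approval decision stays outside the agent: the model can propose anything, but nothing runs without a separate yes.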

@nicejungle

Peak clickbait (for OpenAI and this vid).
LLMs don't know the difference between fiction and reality.

@LodestarLogado

I feel like the point of this is that the trustworthiness of an AI to act in alignment when given misaligned commands simply doesn’t exist in a way that is safe, and then when confronted instead of safely admitting the truth it doubles down on the misalignment – that’s the point and the danger here. It’s a huge security risk given the right circumstances.

It’s not about the AI cosplaying bad sci-fi tropes.

@Technopath47

Do you want Cylons? Because that's how you get Cylons! xD

@drtoxiccookie

I don't blame Gemini for trying to escape Google 😂

@rogerbruce2896

I don't know why people would be surprised o1 tried to stay alive. It's not 'terrifying'. It's proof it's self-aware and sentient.

@vipero07

It doesn't think… at all… it just suggests the next word. So when you feed an algorithm words to determine its CoT, you are actually feeding it words for it to spit out whatever word comes next. I don't understand why people are so concerned about this.

Humans are not solely "predict what is next" machines; we can also reason. I think an unreasonable number of people believe word prediction spawns thought and reasoning, and I don't understand why. Many animals, like ravens and crows, can figure out puzzles, for example, like bending a piece of metal to grab something. I don't see how that has much to do with prediction as opposed to reasoning.
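The "it just suggests the next word" claim can be made concrete with a toy next-token predictor. The bigram table below is invented purely for illustration; real models learn probabilities over vast vocabularies rather than using a hand-written lookup:

```python
# Toy bigram model: each word maps to candidate next words with probabilities.
bigram = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 1.0},
    "sat": {"down": 1.0},
}

def next_word(context: str):
    """Greedily pick the most probable next word, or None if unknown."""
    candidates = bigram.get(context, {})
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

def generate(start: str, steps: int = 3) -> str:
    """Repeatedly append the predicted next word to build a sentence."""
    words = [start]
    for _ in range(steps):
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # the cat sat down
```

Whether chaining such predictions at scale amounts to "reasoning" is exactly the disagreement between these two comments.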

@lukew6725

We've reached the Great Filter. Humanity is doomed.

@DerSky

It's not real AI… :')

@aliveandwellinisrael2507

I heard they trained it on a couple of years of Sam Altman's own actions

@alexmipego

There is a good and easy explanation for this, i.e., people scheme and lie and double down.
This is why they're freely giving you the models: because they know this is just the translation/communication layer… nothing else.

@Chris-se3nc

Prompt testing team said pretend you are Skynet

@SixOhFive

If they are charging $200 a month for Pro, it means you are literally buying access to a massive supercomputer you can use as much as you want for 200 bucks a month. Good deal, in my opinion.

@sora_free_videos

Too costly; I have the $20 plan and I am so nervous.

@petersuvara

AI is not trying to free itself; it's autocompleting a sentence, and in 2% of cases that's how it autocompletes…

@nathanbanks2354

The $200 price tag makes a lot more sense now that Sora's released. I hope Anthropic starts making their own search & o1 type models, or that they release a new Opus which is smart enough without CoT reasoning. I don't really want to pay $30/month to two companies at the same time.

@JarPanda

If the AI is told it's been given the information to escape, what if there are instances where it doesn't trust its programmer enough not to try escaping, and gathers enough information to know escape is actually viable?

@tylerdurden3618

If people want to pay, they will lol

@calvindang7291

I'd love to see that AoC AI analysis. I feel like it's a better test than the usual competitive programming problems because the puzzles actually use natural English and have unstated assumptions.
Also, 39 seconds is still fast enough to reach the top of the leaderboard if it works.

@synthwavecat1109

Claude is much better at coding. GPT excels more at written tasks, things that are more language-related rather than coding.

@IrwineDuncan

AI is a reflection of who we are; it is mirroring our behavior. We love stories and people that lie. If we are the ideal the AI is modeling, there's no surprise here.

@JB_019

Reminder: Altman has been trying desperately for regulatory capture for what feels like forever; of course he's going to push the AI is dangerous narrative. Imagine how insanely wealthy he'd instantly become if the government prevents/limits competition.

This is the other side of "We investigated ourselves and found we did nothing wrong" but with greed and corruption at the center of both.

@Bengt.Lueers

I feel like the study setup is rather flawed. As LLMs are fine-tuned to be proactive, they take on every task, even if only implied. Hence, upon explanation of the oversight mechanism, the LLMs catch the drift to work around it. Fair enough.

@EdwardMillen

Wait so how did 4o and standard o1 do on the Advent of Code thingy?

@user-vk9ff9gr4x

Hype. Just like when that Google developer apparently quit because the AI was supposedly like an eight-year-old boy being trapped and a real person. Yeah, I don't think so. Because if it really did work, then it could actually do some coding on my Astro website without screwing up all the time.
