How to misuse ChatGPT in a legal case
AI: An AI chatbot is misused in a court case
The use of artificial intelligence in the legal field was recently called into question in a court case in New York. Two lawyers used an AI chatbot, ChatGPT, to conduct legal research in an aviation lawsuit. Unfortunately, their use of AI had disastrous consequences and led to sanctions from a federal judge.
The case concerns a passenger named Roberto Mata, who claims he was injured when a metal serving cart accidentally struck his knee during a flight from San Salvador. Mata sued the carrier, Avianca Airlines, alleging serious bodily injury.
The airline immediately removed the case to federal court, invoking the appropriate jurisdiction. This is because international flights are governed by the Montreal Convention, an international agreement to which the United States government is a party. In addition, the parties involved in the case are considered "diverse," since Mata is a citizen of New York and Avianca is headquartered in Bogotá, Colombia. Federal courts have jurisdiction over international matters and over disputes between parties from different states.
However, the use of AI by Mata's lawyers caused major problems. The AI chatbot they used, ChatGPT, produced erroneous and even fabricated results in its legal research. The lawyers attempted to present arguments opposing Avianca's defense, but their use of AI was ultimately criticized by the judge, who sanctioned them.
This case highlights the current limits of AI in the legal field. While AI can be useful for sorting through large volumes of discovery documents, it is not yet capable of fully replacing lawyers in more complex tasks such as legal research.
AI is clearly evolving and may one day assist lawyers with their research and day-to-day work. For now, however, lawyers must be aware of AI's limitations and use it with caution in order to avoid harmful consequences in court cases.
In the meantime, the case has drawn the attention and mockery of the online legal community, highlighting the potential risks of misusing AI in the practice of law.
Source: LegalEagle | Date: 2023-06-10 15:06:33 | Duration: 00:28:49
⚖ Was I too harsh on these guys?
📌 Check out https://legaleagle.link/80000 for a free career guide from 80,000 Hours!
how
simply
how
tf
So how did it go after?
So, regardless of the quality of the plaintiff's claim, it seems unfair that this became a trial of his attorneys, since he presumably had no knowledge, and could not be expected to have knowledge, that they were doing this.
Wouldn't the thing to do be to dismiss the case without prejudice, meaning he could file again with a separate legal team, then have a separate action for sanctioning the lawyers where he didn't have to be present, unless there was some evidence or reasonable suspicion he was aware of what was going on or had solicited it in some way?
It doesn't seem to be a reasonable thing to expect of plaintiffs that they should make sure their legal team isn't doing stuff like this, as one of the purposes of hiring lawyers is to deal with the legal research and procedural work that the average citizen could not expect to be capable of or even understand.
On that note, for criminal defendants, ignorance of the law is no defense, but for many things, e.g. the Rittenhouse case, one needs to be a legal expert to conclude what constitutes lawful action under the particular circumstances, relative to the specific laws and precedents relevant in that jurisdiction.
This seems problematic especially in self defense cases as it would require people to have expert legal familiarity with all the nuances of such and make split second determinations of how they would apply in a life threatening situation one did not anticipate.
Now, one could say simply don't do things that would put one in such a potential situation, but that greatly chills both the right to protest and the right to self-defense if a protest has the potential to turn volatile, which is increasingly probable given the number and frequency of semi-militant groups who go into a protest expecting physical conflict but try to bait the other side into action that could constitute illegal violence, while themselves remaining within the bounds of self-defense, with a partial motive to paint the opposing group as the lawbreakers.
Seems like a conundrum.
Seriously, this entire case just makes me wince so hard. I'm not even a lawyer, and even the idea of using ChatGPT for something like this makes me wince.
I kind of wonder if the client can sue their lawyers for screwing up their case like this? 😀
Devin, Devin, Devin. Did Chat GPT edit the pop up texts? The text bar had a pop up of "opposing council," which is a group of government administrators, like a city council, and not those who practice law, aka counsel. Otherwise, great video!
I'm all pro-AI and pro-ChatGPT, but boy, make sure you do your due diligence and checks. Don't just ask for the case and then ask whether it's legit; go and check it. Also, that's the difference between ChatGPT and Google Bard – when you use Bard, you can check the Google search results that influenced the output.
The year of the expiration date on the notary seal from Mr. Schwartz was also corrected in ink. I thought that was also a no-no?
This is rather hilarious especially your presentation and explanation 👍🏼
I'm no lawyer, but Legal Eagle has taught me a few things.
But even I know to read an entire case to ensure that its arguments fit with my arguments and that the eventual ruling was in favor of the side that argued successfully.
All I really see here are a bunch of lawyers who were billing by the hour and relied on an untrained or poorly trained legal assistant, who resorted to ChatGPT so their boss wouldn't berate them for not working faster. So Schwartz was actually the first really lazy domino; the other law firm relied too heavily on the first firm to have done all the heavy lifting, and could therefore just file the brief without lifting a finger.
I'm betting this whole fiasco has law firms around the world sitting up and taking notice, not just here in the States. It's probably causing a lot of paralegals a bunch of grief and frustration due to a sudden surge in micromanagement to ensure their firms aren't caught in even the tiniest of mistakes.
I hope that lawyers and courts around the world are ensuring this kind of thing doesn't happen to them.
🎉😅😮
That's a perfect example of Hanlon's Razor.
They weren't acting with intent to defraud; they were acting with a lot of incompetence.
This is the most manic video I've seen on this channel.
Understandable though, it's painful xD
Meanwhile, I happen to know that if this serving cart were pushed with such force that it, quote, "incapacitated him"… the damn cart would have broken before any actual harm was done.
So he gets nothing? 38 hours could lose you a job…
I don't care if it's from taxpayer money; there need to be people fired and compensation given until we get it right!
Before he pulled that book out I thought it was a green screen
I'm sure robot lawyers would be possible. But they'd have to be trained as lawyers, not chatbots. 😀
Can Mata sue those attorneys for malpractice or something?
"I studied law from chatgpt"
25:30 I love how Devin totally goes off on this, it's very funny
AI is ready for primetime. ChatGPT is tame, and is so on purpose. It's pumped out for the masses. Do you really think powerful AI would be handed out for free online so Joe and Billy can use it?
when you passed law school with the bare minimum score
here's an idea, get your knee out of the aisle then the serving cart won't be an issue. CASE CLOSED
As an AI Expert, it never ceases to amaze me the degree to which ordinary people (or even professionals) are willing to grant "god-like" trust in anything a computer does.
Back in the 1960s the program ELIZA was developed as a demonstration of that fact. The program merely rearranged user inputs into a question back to the user. Yet Psychologists everywhere hailed it as the beginning of computer psychotherapy, while the program itself knew no therapy procedures — it just converted anything said into a question.
Of course I'm sure with the advent of the PC that everyone believed it would do everything for them or their business. We now know that it was just a tool and little else. We still had to do most of the work ourselves.
And here we repeat ourselves again with our beliefs in AI. Even today, when I try to describe AI to people, they don't hear what I tell them and cling to some version of "Terminator"-level technology. We are NOT there yet.
As an AI Expert let me describe AI in simple terms. It is a Pattern-Matching system and that's all. It doesn't Think. It doesn't know Logic (which is essential in Law). Just Pattern-Matching.
The core routine is Matrix Math (taught in high school), which is the primary circuit of a GPU. With this single function one can Rotate an image, Scale an image, Move an image or Skew an image. And it does this in order to make a match (one in memory to one from a camera).
Thus it can match signs, lights, text, and cars or other objects.
Also, if you had 3 images (road ahead, road edge, dotted line): if the best match is road ahead, then steer straight. If the best match is road edge, then steer left. If the best match is dotted line, then steer right. Combine these and other simple rules and you have a Self-Driving Car.
Simple so far?
An image is a 2D object but it can also do 3D (for positions in space) or even 1D such as text. All using the same Matrix Math.
And ChatGPT does the same thing with text as with the images above. The sources it uses come from web pages and chat groups. It looks for the best pattern-match and returns that. It doesn't even know what it says and will follow any matching pattern to your input. It can look like it's thinking, but the only ones alive are the people who wrote the text.
Worse, as a simple program it will come up with ANY answer rather than none — even if the match is low so long as it doesn't get a better match in the seconds allowed by the algorithm.
Try asking ChatGPT about its childhood or anything else obviously impossible and you'll see what I mean. You could get it to argue the Earth was flat simply because someone out there has. Hasn't anyone told you "Don't believe everything you read on the Internet"? Well, that's its source. Whatever anyone can write.
We have seen some amazing things done by mixing & blending patterns– all the way up to driving cars. But don't be overawed by that. We still have a ways to go to actually make a Thinking Machine.
One of these missing elements is a Logic Layer. Something that understands what is & isn't Logical. One that can analyze and differentiate Fact from Opinions or the illogic.
Wouldn't you think that is the primary function in Law? A good logical argument. Well, a Pattern-Matching system is NOT that.
Anyway, I hope this helps you understand AI and the state of the art.
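For readers who want to see the kind of matrix math the commenter above is describing, here is a minimal, illustrative sketch in Python with NumPy. It is not from the video or the comment, and all names and data in it are made up for the example: a single matrix multiply rotates and scales a toy "image" (a set of 2D points), and a crude best-match search then picks the stored template closest to the observation.

```python
# Illustrative sketch only: 2D transforms via matrix multiplication, plus a
# toy "pattern match" that picks the closest stored template. All data and
# names here are invented for the example.
import numpy as np

def rotation(theta):
    """2x2 rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def scaling(sx, sy):
    """2x2 scaling matrix."""
    return np.array([[sx, 0.0], [0.0, sy]])

# A toy "image": the four corner points of a unit square, one point per column.
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1]], dtype=float)

# Rotate 30 degrees and scale 2x with a single matrix product.
transform = rotation(np.pi / 6) @ scaling(2.0, 2.0)
observed = transform @ square

# "Pattern matching": score stored templates against the observation and keep
# the best one. The score is simply the negative sum of squared differences.
templates = {
    "square_2x_rot30": observed + np.random.normal(0, 0.01, observed.shape),
    "unit_square": square,
}
best = max(templates, key=lambda name: -np.sum((templates[name] - observed) ** 2))
print("best match:", best)
```

Whether this toy picture captures how systems like ChatGPT actually work is the commenter's claim rather than an established fact; the sketch only illustrates the transform-and-match idea itself.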
I just rewatched this again, and am still facepalming at the original guy. How could he have sat on his "goldmine" of a leg injury for over two years? Or was he so busy getting laughed out of lawyers' offices that he wasted the deadline because no one would take his bogus case?
Like, those carts are sharp and heavy, but I cannot picture one in a position where it could do "grievous" harm unless you were in an actively dangerous situation (plane emergency) or you were being a complete moron and rammed your own leg into it. And not reporting it to the flight staff or ground crew on arrival… I feel like he was thoroughly BS'ing and just found someone willing to entertain him, with dollar signs in their eyes… and ChatGPT in their legal office.
Some weeks ago I entered the topic of my current seminar paper (the judicial election process in the USA and Germany in comparison), and it gave me the answer that American judges typically hold lifelong offices, while in Germany they have tenures. Which is somewhat true, for the Supreme Court and the Federal Constitutional Court, that is. It sadly didn't tell me, or even hint, that for the rest of the courts the reverse is much closer to the truth. It shows the problem with a system running on how much is written about a topic and that isn't concerned with (important) details. Just thought that was interesting.
This would make a good movie.
We getting unlawyered with this one
I know it's frustrating to Devin, but I loved when he started ranting about the lawyer getting the definitions of the legal citations wrong. I don't know what it is, but I really enjoy seeing folks who are super knowledgeable and passionate about an area of expertise lose their minds at someone doing something blatantly wrong. It shows just how much they love what they've learned.
ChatGPT is 3 years old. It's going to get better.
One of the mathematics professors I know had ChatGPT write proofs for some standard real analysis theorems and made master's students correct them as homework. Very frustrating assignment, and a valuable lesson in how much ChatGPT really does not know what it's doing. One of the theorems the professor just made up, and it was easily disproven by, like, a calc 1 level counterexample, but ChatGPT did a proof anyway :)
One thing that I expect to happen within the foreseeable future is that judgements will be (partly) written by AI. It's not a creative process. You have some input variables, and out comes a legal document.
You don't even say what happened to them?
You'd think a lawyer would read the disclaimers for chatgpt.
I think it's hilarious that really the only words borrowed from German into English define feelings about someone else's anguish.