Don’t court stenographers basically use tailored voice models and voice to text transcription already?
AI in health and medtech has been around and in the field for ages. However, two persistent challenges make rollout slow–and they’re not going anywhere, given the stakes at hand.
The first is just straight regulatory. Regulators don’t have a very good or very consistent working framework to apply to these technologies, but that’s in part due to how vast the field is in terms of application. The second is somewhat related to the first, but it’s also very market driven: the explainability of outputs. Regulators generally want it, of course, but customers (i.e., doctors) also don’t just want predictions/detections–they want and need to understand why a model “thinks” what it does. Doing that in a way that does not itself require significant training in the data science and computer science underlying the particular model and architecture is often pretty damned hard.
I think it’s an enormous oversimplification to say modern AI is just “fancy signal processing” unless all inference, including that done by humans, is also just signal processing. Modern AI applies rules it is given, explicitly or by virtue of complex pattern identification, to inputs to produce outputs according to those “given” rules. Now, what no current AI can really do is synthesize new rules uncoupled from the act of pattern matching. Effectively, a priori reasoning is still out of scope for the most part, but the reality is that that simply is not necessary for an enormous portion of the value proposition of “AI” to be realized.
Summary judgment is not a thing separate from a lawsuit. It’s literally a standard filing made in nearly every lawsuit (even if just as a hail mary). You referenced “beyond a reasonable doubt” earlier. That is also not the standard used in (US) civil cases–it’s typically a preponderance of the evidence standard.
I’m also not sure what you mean by “court approved documentation.” Different jurisdictions approach contract law differently, but courts don’t “approve” most contracts–parties allege there was a binding contractual agreement, present their evidence to the court, and a mix of judge and jury determines whether, under the jurisdiction’s laws, an enforceable agreement occurred and how it can be enforced (i.e., are the obligations severable, what damages, etc.).
There’s plenty you could do if no label was produced with a sufficiently high confidence. These are continuous systems, so the idea of “rerunning” the model isn’t that crazy, but you could pair that with an automatic decrease in speed to generate more frames, stop the whole vehicle (safely, of course), divert the path, and I’m sure an actual domain and subject matter expert–or a whole team of them–could come up with plenty more. But while we’re on the topic, it’s not really right to even label these outputs as confidence values–they’re just output weightings associated with the respective labels. We’ve sort of decided they vaguely match up to something kind of sort of approximating confidence, but they aren’t based on a ground truth like I’m understanding your comment to imply–they derive entirely from the trained model weights and their confluence. Don’t really have anywhere to go with that thought beyond the observation itself.
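To make the gating idea concrete, here’s a minimal sketch in Python. Everything here is made up for illustration–the threshold, the function names, the fallback actions–and note that the “confidence” is literally just a softmax over raw logits, which is exactly the point above about these not being grounded probabilities.

```python
import numpy as np

CONFIDENCE_FLOOR = 0.85  # illustrative threshold, not any real system's value

def softmax(logits: np.ndarray) -> np.ndarray:
    """Normalize raw logits into scores that sum to 1 (the pseudo-'confidence')."""
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

def classify_frame(logits: np.ndarray) -> tuple[int | None, float]:
    """Return (label, score), or (None, score) if nothing clears the floor."""
    scores = softmax(logits)
    best = int(scores.argmax())
    if scores[best] < CONFIDENCE_FLOOR:
        return None, float(scores[best])
    return best, float(scores[best])

def handle_frame(logits: np.ndarray) -> str:
    """Gate downstream behavior on the label, with a fallback on a miss."""
    label, score = classify_frame(logits)
    if label is None:
        # Any of the fallbacks above could slot in here: slow the vehicle to
        # buy more frames, replan the path, or stop safely.
        return "reduce_speed_and_resample"
    return f"proceed_with_label_{label}"
```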
Are you under the impression that I think Tesla’s approach to AI and computer vision is anything but fucking dumb? The person said a stupid and patently incorrect thing. I corrected them. Confidence values being literally baked into how most ML architectures work is unrelated to intentionally depriving your system of one of the most robust computer vision signals we can come up with right now.
All probabilistic models output a confidence value, and it’s very common and basic practice to gate downstream processes around that value. This person just doesn’t know what they’re talking about. Though, that puts them on about the same footing as Elono when it comes to AI/ML.
I’ve worked on processing submissions for this project. Honestly, it probably ends up just costing them more to do this program, which is mostly just a paid PR activity. The overwhelming majority of submissions, and I mean like 99%, are either not prior art in the sense of patent law or were already retrieved by the law firm on the case.
My z flip is hands down my favorite phone I’ve ever owned, and I didn’t get it expecting to like it much. I just needed a new phone, and with Samsung’s recycling program, my old near-tablet-sized phone made the switch cost barely 100 bucks.
There are a lot of small advantages it provides that quickly add up to it being an overall superior experience. Now if only Bixby wasn’t the worst fucking thing ever.
Their non-profit status had nothing to do with the legality of their training data acquisition methods. Some of it was still legal and some of it was still illegal (torrenting a bunch of books off a piracy site).
Oh wow, this suit is shaping up to be silly. I didn’t realize it was filed in Japan, too. That makes the patent aspect even shakier. Japan has no discovery process like the US has, which is generally very necessary for many software-related patents because, assuming they have a strong likelihood of surviving challenge, they are typically drawn to processes that are completely obfuscated from the user and outside observers.
There is an era of patents from the late 90s through the early-mid-00s that were insanely vague and rarely stand up to scrutiny, but most are expiring at this point, if they haven’t already. Generally, though, patents are not granted on “concepts” but on implementations. That’s a sometimes ambiguous line, but that’s a fundamental principle of modern patents.
My point is just that they’re effectively describing a discriminator. Like, yeah, it entails a lot more tough problems to be tackled than that sentence makes it seem, but it’s a known and very active area of ML. Sure, there may be other metadata and contextual features to discriminate upon, but eventually those heuristics will inevitably be closed off and we’ll just end up with a giant, distributed, quasi-federated GAN. Which, setting aside the externalities–which I’m skeptical anyone in a position of power to address is equally in an informed position to understand–is kind of neat in a vacuum.
Yes, it’s called a GAN and has been a fundamental technique in ML for years.
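For anyone unfamiliar, here’s roughly what that generator/discriminator loop looks like–a toy PyTorch sketch on fake 1-D data, with every size and learning rate pulled out of thin air for illustration, not any sort of production recipe:

```python
import torch
import torch.nn as nn

LATENT, HIDDEN, BATCH = 8, 32, 64

# Generator maps random noise to a sample; discriminator scores samples.
gen = nn.Sequential(nn.Linear(LATENT, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, 1))
disc = nn.Sequential(nn.Linear(1, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, 1))

g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(BATCH, 1) * 0.5 + 3.0  # "real" data: samples near 3.0
    fake = gen(torch.randn(BATCH, LATENT))

    # Discriminator step: score real data high, generated data low.
    d_loss = (loss_fn(disc(real), torch.ones(BATCH, 1))
              + loss_fn(disc(fake.detach()), torch.zeros(BATCH, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: fool the discriminator into scoring fakes as real.
    g_loss = loss_fn(disc(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The two networks are adversaries: the discriminator gets better at telling real from generated, which forces the generator to produce samples that look more and more like the real distribution.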
I think if you could actually define reasoning, your comments (and those like yours) would be much more convincing. I’m just calling yours out because I’ve seen you up and down this thread repeating it, but it’s a general observation about the vocal critics of the technology overall. Neither intelligence nor reasoning (likewise understanding and knowing, for that matter) is easily defined in a way that is more useful than invoking spirits and ghosts. In this case, detecting patterns certainly seems a critical component of what we would consider to be reasoning. I don’t think it’s sufficient, but it is absolutely necessary.
Genetic algorithms are a sort of broad category, and there are certainly ways you could federate and parallelize them. I think autoML basically applies this within the ML space (multiple trainings explore a solution topology and convergence progress is compared between epochs, with low performers dropping out). Keep in mind, you can also use a genetic algorithm to learn how to explore an old-fashioned state tree.
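As a rough illustration of the select-and-mutate loop I’m describing–toy objective, every setting invented for the example; in an autoML-style setup the genome would encode hyperparameters or architectures instead of a single number:

```python
import random

POP, GENERATIONS, KEEP = 20, 50, 10

def fitness(x: float) -> float:
    """Toy objective with a single peak at x = 2."""
    return -(x - 2.0) ** 2

population = [random.uniform(-10, 10) for _ in range(POP)]

for generation in range(GENERATIONS):
    # Score and rank; the bottom half drops out, like failed autoML trials.
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:KEEP]

    # Refill the population with mutated copies of the survivors.
    children = [s + random.gauss(0, 0.5) for s in survivors]
    population = survivors + children

print(f"best solution: {max(population, key=fitness):.3f}")  # approaches 2.0
```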
Like I’ve said, you are arguing your way into nuanced aspects of copyright law that are absolutely not basic, but I do not agree at all with your assessment of the initial reproduction of the image in a computer’s memory. First, to be clear, what you are arguing is that images on a website are licensed to the host to be reproduced for non-commercial purposes only, and that downstream access may likewise only be non-commercial (defined very broadly–there is absolutely a strong argument here that commercial activity in this situation means direct commercial use of the reproduction; for example, you wouldn’t say that a user who gets paid to look at images is commercially using the accessed images), or it violates the license.

Now, even ignoring my parenthetical, there are contract law and copyright law issues with this. Again, I’m typing with my thumbs and, honestly, not trying to write a legal brief in a random reply on lemmy, but the crux is that it is questionable whether you can enforce licensing terms that are presented to a licensee AFTER you enable, if not force, them to perform the act of copying your work. Effectively, you allowed them to make a copy of the work, and then you are trying to say “actually, you can only do x, y, and z with that particular copy.” This is also where exhaustion rears its head, once you add on your position that a trained model switching from non-commercial to commercial deployment can suddenly, retroactively recharacterize the initial use as unlicensed infringement.

Logistically, it just doesn’t make sense either (for example, what happens when a further downstream user commercializes the model? Does that percolate back to recharacterize the original use? What about downstream from that? How deep into a toolchain’s history do you need to go before this time-traveling breach of exhaustion stops?), so I have a hard time accepting it.
Now, in response to your query wrt my edit, my point was that infringement happens when you do the further downstream reproduction of the image. When you print a unicorn on a t-shirt, it’s that printing that is the infringement. The commercial aspect has absolutely no bearing on whether an infringement occurs. It is relevant to damages and the fair use affirmative defense. The sole question in whether infringement has occurred is whether a copy has been made in violation of the copyright.
And all this is just about whether there is even a copying at the model-training stage. This doesn’t get into a fairly challenging fair use analysis (going by SCotUS’ reasoning on the copyrightability of APIs in Oracle v Google, I actually think the fair use defense is very strong, but I also don’t think there is an infringement happening to even necessitate such an analysis, so ymmv–also, that decision was terrible, and literally every time the SCotUS has touched IP issues, it has made the law wildly worse and more expensive and time-consuming to deal with). It also doesn’t get into whether outputs that are very similar to existing works infringe the way music does (even though there is no actual copying–I think it highly likely that is an infringement). It also also doesn’t get into how outputs might infringe even though there are no IP rights in the outputs of a generative architecture (this is probably more a weird academic issue, but I like it nonetheless). Oh, and likeness rights haven’t made their way into the discussion (nor the incredible weirdness of a class action that includes right of publicity among its claims).
We can, and probably will, disagree on how IP law works here. That’s cool. I’m not trying to litigate it on lemmy. My point in my replies at this point is just to show that it is not “basic copyright law bruh”. The copyright law, and really all the IP law, around generative AI techniques is fairly complicated and nuanced. It’s totally reasonable to hold the position that our current IP laws do not really address this the way most seem to want them to. In fact, most other IP attorneys I’ve talked to with an understanding of the technical processes at hand seem to agree. And, again, I don’t think that further assetizing intangibles into a “right to extract machine learning from” is a viable path forward in the mid and long run, nor one that benefits anyone but highly monied corporate actors.
No, this is mostly incorrect, sorry. The commercial aspect of the reproduction is not relevant to whether it is an infringement–it is simply a factor in damages and Fair Use defense (an affirmative defense that presupposes infringement).
What you are getting at when it applies to this particular type of AI is effectively whether it would be a fair use, presupposing there is copying amounting to copyright infringement. And what I am saying is that, ignoring certain stupid behavior like torrenting a shit ton of text to keep a local store of training data, there is no copying happening as a matter of necessity. There may be copying as a matter of stupidity, but it isn’t necessary to the way the technology works.
Now, I know, you’re raging and swearing right now because you think that downloading the data into cache constitutes an unlawful copying–but it presumably does not if it is accessed like any other content on the internet. Intent is not a part of what makes that a lawful or unlawful copying, and once a lawful distribution is made, principles of exhaustion begin to kick in and we start getting into really nuanced areas of IP law that I don’t feel like delving into with my thumbs, but ultimately the point is that it isn’t “basic copyright law.” But if intent is determinative of whether there is copying in the first place, how does that jibe with an actor not making copies for themselves but rather accessing data retained in a third party’s cache after that party grabbed the data for noncommercial purposes? Also, how does that make sense if the model is being trained for purely research purposes? And what if that model is then leveraged commercially after development? Your analysis, assuming arguendo that it’s correct, leaves far too many outstanding substantive issues to be the ruling approach.
EDIT: also, if you download images from deviantart with the purpose of using them to make shirts or some other commercial endeavor, that has no bearing on whether the download was infringing. Presumably, you downloaded via the tools provided by DA. The infringement happens when you reproduce the images for the commercial purpose (though any redistribution is actually infringing).
Yes, inadvertent copying is still copying, but it would be copying in the output and is not evidence of copying happening in the creation of the model. That was why I used the music example, because it is rather probative of where there could be grounds for copyright infringement related to these model architectures. This may not seem an important distinction, but it has significant consequences on who is ultimately liable and how.
I get that that’s how it feels given how it’s being reported, but the reality is that, due to the way this sort of ML works, what the Internet Archive does and what an arbitrary GPT does are completely different–the former is an explicit and straightforward copy relying on a Fair Use defense, and the latter is the industrialized version of intensive note taking into a notebook full of such notes while reading a book. That the outputs of such models are totally devoid of IP protections actually makes a pretty big difference imo in their usefulness to the entities we’re most concerned about, but that certainly doesn’t address the economic dilemma of putting an entire sector of labor at risk in narrow areas.
??? It is literally impossible for any voter not to know the devil they chose. No, over 70 million voters actively chose to elect perhaps the most incompetent and transparently stupid president in history back into office, this time around with a well known and well documented playbook on how literally every metric of American life, from domestic policy to foreign policy, will be made worse to the sole benefit of big corporate actors and 1%ers. A whole bunch of others were too apathetic to be concerned by this.
Voters ultimately made their choice. A lot of folks are going to die as a result, but unfortunately it won’t be limited to just the idiots that actually chose this.