• 0 Posts
  • 94 Comments
Joined 1 year ago
Cake day: June 14th, 2023


  • AI in health and medtech has been around and in the field for ages. However, two persistent challenges make rollout slow, and they’re not going anywhere given the stakes at hand.

    The first is just straight regulatory. Regulators don’t have a good or consistent working framework to apply to these technologies, partly because the field is so vast in terms of application. The second is somewhat related to the first but is also very market driven: the explainability of outputs. Regulators generally want it, of course, but customers (i.e., doctors) also don’t just want predictions/detections; they want and need to understand why a model “thinks” what it does. Doing that in a way that does not itself require significant training in the data and computer science underlying the particular model and architecture is often pretty damned hard.

    I think it’s an enormous oversimplification to say modern AI is just “fancy signal processing” unless all inference, including that done by humans, is also just signal processing. Modern AI applies rules it is given, explicitly or by virtue of complex pattern identification, to inputs to produce outputs according to those “given” rules. Now, what no current AI can really do is synthesize new rules uncoupled from the act of pattern matching. Effectively, a priori reasoning is still out of scope for the most part, but the reality is that that simply is not necessary for an enormous portion of the value proposition of “AI” to be realized.


  • Summary judgment is not a thing separate from a lawsuit. It’s literally a standard filing made in nearly every lawsuit (even if just as a hail mary). You referenced “beyond a reasonable doubt” earlier. That is also not the standard used in (US) civil cases–the typical standard is the preponderance of the evidence.

    I’m also not sure what you mean by “court approved documentation.” Different jurisdictions approach contract law differently, but courts don’t “approve” most contracts–parties allege there was a binding contractual agreement, present their evidence to the court, and a mix of judge and jury determines whether, under the jurisdiction’s laws, an enforceable agreement occurred and how it can be enforced (i.e., are the obligations severable, what damages, etc.).


  • There’s plenty you could do if no label were produced with sufficiently high confidence. These are continuous systems, so the idea of “rerunning” the model isn’t that crazy, and you could pair that with an automatic decrease in speed to generate more frames, stop the whole vehicle (safely, of course), divert the path, and I’m sure an actual domain and subject matter expert–or a whole team of them–could come up with plenty more.

    But while we’re on the topic, it’s not really right to even label these outputs as confidence scores–they’re just output weightings associated with their respective labels. We’ve sort of decided they vaguely match up to something approximating confidence values, but they aren’t based on a ground truth like I’m understanding your comment to imply–they derive entirely from the trained model weights and their confluence. Don’t really have anywhere to go with that thought beyond the observation itself.
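    A minimal sketch of that observation, using made-up logits for a hypothetical three-label classifier: the “confidence” is just the largest softmax output, a normalized function of the learned weights, and it can be pushed around (e.g., by scaling the logits) without any new evidence about the world.

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical final-layer logits for three labels; numbers invented for illustration.
logits = np.array([2.0, 1.0, 0.1])
scores = softmax(logits)

# The reported "confidence" is just the largest normalized output...
best = scores.max()

# ...and rescaling the logits (a temperature change) inflates that
# "confidence" arbitrarily, with no new information about ground truth.
sharper = softmax(logits * 3.0).max()
```

    In other words, the number tracks the model’s internal weighting, not a calibrated probability of being right.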




  • Maybe I’m wrong, and definitely correct me if so, but I thought the Houthis formed well before the Saudi-led effective genocide occurring in Yemen. In fact, the current conflict is the result of the Houthis basically couping the preceding government? If that’s the case, it doesn’t make much sense to characterize them as a resistance or reactionary force to anything external.





  • What Hamas did had a totally unsurprising result, as distressing as that is: it inevitably resulted in, and continues to result in, unacceptable civilian suffering. Moreover, even though Israel has slow-walked toward a regressive ethnofascist state since at least Rabin’s assassination, the October attack straight up merced whatever dwindling progressive peace movements on the Israeli side were still continuing the struggle.

    One can be anti-Zionist and pro-basic human decency without romanticizing a violent religious fundamentalist organization that has at this point done almost nothing but harm to the interests and well-being of the people whom it purports to be protecting.


  • Do you have actual examples? I’ve lived here (greater Boston area) for over 10 years now and, having grown up in and around WI, the level of racism here strikes me as virtually non-existent in comparison, and where you do see it (lots of systemic artifacts), there is a constant and loud pressure to address it.

    I’m not saying that Boston doesn’t have issues with racism, at many levels and in many ways, but I hear people claim it’s exceptionally worse than elsewhere in the country and I’m just confused. I had family friends from the Chicagoland suburbs try to convince me that Chicago has fewer issues with racism than Boston, and I just can’t imagine what the fuck happened to their brains to make them think that.








  • My point is just that they’re effectively describing a discriminator. Like, yeah, it entails a lot more tough problems than that sentence makes it seem, but it’s a known and very active area of ML. Sure, there may be other metadata and contextual features to discriminate on, but eventually those heuristics will inevitably be closed off and we’ll just end up with a giant distributed, quasi-federated GAN. Which, setting aside the externalities–which I’m skeptical anyone with the power to address is also in an informed position to understand–is kind of neat in a vacuum.
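    For concreteness, a discriminator in this sense is just a classifier trained to score inputs as real vs. generated; in a GAN the generator then adapts to fool it. A toy sketch with an invented one-dimensional dataset and a logistic-regression discriminator (everything here is illustrative, not any particular detector):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: "real" samples cluster near 1.0, "generated" near -1.0.
real = rng.normal(1.0, 0.3, size=200)
fake = rng.normal(-1.0, 0.3, size=200)

x = np.concatenate([real, fake])
y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = real, 0 = generated

# Logistic-regression discriminator trained by plain gradient descent
# on the binary cross-entropy loss.
w, b = 0.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # predicted P(real)
    w -= 0.5 * np.mean((p - y) * x)
    b -= 0.5 * np.mean(p - y)

# The trained discriminator scores new inputs as real vs. generated.
p_real = 1.0 / (1.0 + np.exp(-(w * 1.2 + b)))   # near 1: looks "real"
p_fake = 1.0 / (1.0 + np.exp(-(w * -1.2 + b)))  # near 0: looks "generated"
```

    In the adversarial setting, the generator would keep shifting its samples toward the “real” cluster until scores like these stop being informative, which is the closing-off dynamic described above.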