  • looks dubious

    The problem here is that if this is unreliable – and I’m skeptical that Google can produce a system that will work across the board – then you have a synthesized image that now has Google attesting is non-synthetic.

    Maybe they can make it clear that this is a best-effort system, and that they will only flag some synthetic images.

    There are a limited number of ways that I’m aware of to detect whether an image is edited.

    • If the image has previously been through lossy compression, there are ways to modify the image to make differences in compression artifacts across different parts of the image more visible, or – I’m sure – to look for such artifacts statistically. A sketch of one such trick follows this list.

    • If an image has been previously indexed by something like Google Images, and Google’s index is sufficient to permit fuzzy search for portions of the image, then they can identify an edited image because they can find the original; there’s a toy sketch of the usual building block for that below as well.

    • It’s possible to try to identify light sources based on shading and specular highlights in an image, and to find points of the image that don’t match. There are complexities here; for example, a surface might simply be shaded in such a way that it looks like light is shining on it, like a realistic poster on a wall. For generation rather than photomanipulation, better generative AI systems will probably also tend to make this go away as they improve; it’s a flaw in the image.
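    To give a feel for the first bullet: a crude version of the recompression trick is what’s usually called error level analysis – recompress the image at a known JPEG quality and amplify the places where the recompression error is unexpectedly large. A minimal sketch with Pillow; the file name is a placeholder, and real forensics tooling is considerably more careful than this:

    ```python
    from PIL import Image, ImageChops
    import io

    def error_level_analysis(path, quality=90):
        """Recompress the image at a fixed JPEG quality and return the
        amplified per-pixel difference.  Regions edited after the original
        compression often recompress differently and show up brighter."""
        original = Image.open(path).convert("RGB")

        # Round-trip the image through JPEG at a known quality level.
        buf = io.BytesIO()
        original.save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        recompressed = Image.open(buf)

        # Absolute difference, amplified so faint artifacts become visible.
        diff = ImageChops.difference(original, recompressed)
        return diff.point(lambda px: min(255, px * 20))

    # "suspect.jpg" is a placeholder path.
    error_level_analysis("suspect.jpg").save("ela.png")
    ```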

    But none of these is a surefire mechanism.
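    For the fuzzy-search idea, the usual building block is a perceptual hash. Here’s a toy average-hash, just to show the shape of it – real systems are far fancier, this isn’t Google’s actual method, and the file names are placeholders:

    ```python
    from PIL import Image

    def average_hash(path, size=8):
        """Shrink to size x size grayscale, then record each pixel as
        above/below the mean.  Similar images get similar bit strings."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return sum(1 << i for i, px in enumerate(pixels) if px > mean)

    def hamming(a, b):
        """Count differing bits; a small distance suggests a near-duplicate."""
        return bin(a ^ b).count("1")

    # Compare a suspect image against one from the index.
    if hamming(average_hash("suspect.jpg"), average_hash("indexed.jpg")) < 10:
        print("probable match - diff against the original")
    ```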

    For AI-generated images, my guess is that there are some other routes.

    • Some images are going to have metadata attached. That’s trivial to strip – the sketch after this list shows just how trivial – so it’s not much good if someone is actually trying to fool people.

    • Maybe some generative AIs will try doing digital watermarks. I’m not very bullish on this approach. It’s a little harder to remove, but invariably, any kind of lossy compression is at odds with watermarks that aren’t very visible. As lossy compression gets better, it either automatically tends to strip watermarks – because lossy compression tries to remove data that doesn’t noticeably alter an image, and watermarks rely on hiding data there – or the watermarks have to visibly alter the image. The toy experiment below makes this concrete. And that’s before people actively develop tools to strip them. And you’re never gonna get all the generative AIs out there to add digital watermarks.

    • I don’t know what the right terminology is, but my guess is that latent diffusion models try to approach a minimum error for some model during the iteration process. If you have a copy of the model used to generate the image, you can probably measure the error from what the model would predict – basically, how much one iteration would change an image or part of it. A rough sketch of this idea appears below. I’d guess that it only works well if you have a copy of the model in question or a model similar to it.
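    To make the first two bullets concrete, here’s a toy demonstration of how fragile both signals are: stripping metadata is just a re-save of the pixels, and an LSB-style hidden watermark – a deliberately naive stand-in for real watermarking schemes – doesn’t survive one pass of JPEG compression. File names are placeholders:

    ```python
    from PIL import Image
    import io

    # --- 1. Stripping metadata is just a re-save of the pixels. ---
    tagged = Image.open("generated.png")
    clean = Image.new(tagged.mode, tagged.size)
    clean.putdata(list(tagged.getdata()))
    clean.save("stripped.png")  # EXIF/XMP/text chunks are left behind

    # --- 2. A naive invisible watermark vs. one pass of JPEG. ---
    img = Image.open("generated.png").convert("RGB")
    w, h = img.size
    px = img.load()
    for x in range(w):
        for y in range(h):
            r, g, b = px[x, y]
            # Hide a known bit pattern in the red channel's LSB.
            px[x, y] = ((r & ~1) | ((x + y) & 1), g, b)

    # One round of lossy compression...
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=85)
    buf.seek(0)
    jpeg_img = Image.open(buf).convert("RGB")
    jp = jpeg_img.load()

    # ...and check how many watermark bits survived.
    intact = sum((jp[x, y][0] & 1) == ((x + y) & 1)
                 for x in range(w) for y in range(h))
    print(f"watermark bits intact: {intact / (w * h):.0%}")  # ~50% = chance
    ```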

    I don’t think that any of those are likely surefire mechanisms either.
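    And for what it’s worth, a rough sketch of the reconstruction-error idea from the last bullet. Everything here is hypothetical scaffolding – `model` stands in for some pretrained noise-prediction denoiser, not any particular library’s API – and whether the signal is usable in practice is exactly the open question:

    ```python
    import torch

    def denoising_error(model, x0, t, alpha_bar):
        """Noise a clean image x0 to timestep t, have the model predict the
        noise, and return the mean squared prediction error.  The hunch: a
        model predicts the noise on its own outputs better than on real photos.

        model:     hypothetical callable, model(x_t, t) -> predicted noise
        x0:        image tensor, shape (1, 3, H, W), values in [-1, 1]
        alpha_bar: 1-D tensor of cumulative noise-schedule products
        """
        eps = torch.randn_like(x0)
        # Standard diffusion forward-noising step.
        x_t = alpha_bar[t].sqrt() * x0 + (1.0 - alpha_bar[t]).sqrt() * eps
        with torch.no_grad():
            eps_hat = model(x_t, t)
        return torch.mean((eps_hat - eps) ** 2).item()
    ```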

  • For me, video is rarely the form I want to consume content in. It’s also very obnoxious if I’m on a slow data link (e.g., a slow or saturated cell connection).

    However, sometimes it’s the only form that something is available in. For major news items, you can usually get a text-form article, but that isn’t true of all content. The other day, I submitted a link to a Ukraine community for a YouTube video of a Michael Kofman interview about military aid. I also typed up a transcript, but the video was something like an hour and a half long, and I don’t know whether that’s a reasonable bar to expect people to meet.

    I think that some of this isn’t that people actually want video, but that YouTube gives content creators an easy way to monetize video. I don’t think that there’s a good equivalent for independent creators of text, sadly enough.

    And there are a few times that I do want video.

    And there may be some other people that prefer video.

    Video doesn’t actually hurt me much at this point, but it would be nice to have a way to filter it out for people who don’t want it. Moving all video to another community seems like overkill, though. Think it might be better to add some mechanism to Threadiverse clients to permit content-filtering rules; that’s probably a better way to meet everyone’s wants. It’d also be nice if there were some way to clearly indicate that a link is video content, so that I can tell prior to clicking on it – a client could even approximate that today with a heuristic like the sketch below.
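    A rough sketch of what such a filter rule might look like – the host list is deliberately incomplete, and none of this is any real client’s API:

    ```python
    from urllib.parse import urlparse

    # Deliberately incomplete list of hosts that are almost always video.
    VIDEO_HOSTS = {"youtube.com", "www.youtube.com", "youtu.be",
                   "vimeo.com", "odysee.com"}
    VIDEO_EXTENSIONS = (".mp4", ".webm", ".mkv", ".m3u8")

    def looks_like_video(url: str) -> bool:
        """Cheap, offline guess at whether a post link is video content."""
        parsed = urlparse(url)
        return (parsed.hostname in VIDEO_HOSTS
                or parsed.path.lower().endswith(VIDEO_EXTENSIONS))

    # A client could then tag or hide matching posts per user preference.
    print(looks_like_video("https://youtu.be/abc123"))  # placeholder URL -> True
    ```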

  • You can still get a few phones with built-in headphone jacks. They tend to be lower-end and small.

    I was just looking at phones with very long battery life yesterday, and I noticed that the phone currently at the top of the list I was looking at – a large, high-end gaming phone – also had a headphone jack. The article commented on how unusual that was.

    Think it was an Asus ROG something-or-other.

    kagis

    https://rog.asus.com/us/phones/rog-phone-8-pro/

    An Asus ROG Phone 8 Pro.

    That’s new and current. Midrange-and-up phones with audio jacks aren’t common, but they are out there.

    Honestly, I’d just get a USB-C audio interface with pass-through PD, so that you can still charge with it plugged in, and leave it plugged into your headphones if you want to use 1/8-inch headphones. It’s slightly more to carry around, but not that much more.

    Plus, the last smartphone I had with a built-in audio DAC would spill noise into the headphone output when charging. Very annoying; it needed better power circuitry. I don’t know whether any given USB-C audio interface avoids the issue, but if the DAC is built into the phone, there’s a limited amount you can do about it. If it’s external, you can swap it, and there’s hope that the less-constrained space means better power-supply circuitry.


  • Babies do start to pick up on faces early on – we’ve got some hardwired stuff there – and on the mobiles there, the faces are away from the baby.

    https://www.whattoexpect.com/toddler/self-recognition/

    • At birth: Even though your baby doesn’t recognize you, she certainly likes the look of you. Studies have shown that even newborns, with their eyesight limited to about 12 inches, prefer to look at familiar faces — especially yours.

    • Months 2 to 4: Your baby will start to recognize her primary caregivers’ faces, and by the 4-month mark, she’ll recognize familiar faces and objects from a distance.

    Most of the complex details and shapes face away from the baby. Oddly, of the mobiles I see there, the few designs aimed at the baby are mostly black-and-white, not colorful, whereas I’d have thought that color would be preferable.