  • Hey. Large-chested people who want it to be smaller and flatter: I have a tip for you.

    I am a trans man. I have an h cup chest. That is not a typo, not a brag, and not an invitation to sexually harass me. This means I have about 4 pounds of breast. This means that binders do not work for me. There’s not enough structure in the compression to keep that much weight in place.

    I wore a sports bra under my binder for a time- it kept things in place, and the binder flattened. This isn’t really safe, and I recommend against it. It also never actually got me looking masc- I tended to look like I had a b or c cup. TransTape I discarded too- it’s just not sturdy enough.

    Enter Enell. Specifically, the Enell Sport High Impact Bra.

    image

    I want you to look at the construction of that sports bra. It clasps in the front. This flattens the chest. And since it’s a sports bra designed for busty people, it LOCKS everything in place. When I wear my Enell sports bra, I do not bounce. It also gets me looking like I have an a cup at worst- and at best, when I layer, I actually look masc.

    Admittedly, they’re not cheap. That one’s $66. But I’ve tried even custom binders, and they don’t work as well as Enell. I was actually contemplating a custom-built corset before I found Enell. Enell is also much, much safer than layering compression, since it is being used as intended (sort of). As a bonus, you can actually exercise in it- it’s a sports bra!

    I will note that they use their own sizing system, so you will have to measure yourself.

    Happy binding!

  • I’d also like to note that you can ask for this even if you’re closeted and scared without raising a flag. Just say you want to take up running, or if you’re already sporty, that it’ll help with that. It’s technically not a lie- it’s a great sports bra.

  • hell if you’re busty you don’t even need to be getting active, you can just say that you’re having back pain and want to try something new to keep them in place.

  • For the darlings that bind

  • Leaving aside the whole debate about the ethics of AI art and copyright, I think one of my biggest gripes with the AI art industry is that generative AI art has this natural tendency towards producing weird and surreal imagery that I actually think DOES have a lot of artistic merit and potential if explored and leaned into as one of the unique strengths of the medium.

    Like, when AI image generators were at the stage in between the vaguely recognizable imagery produced by neuralblender and the type of generators we're seeing today, they were producing really fascinating imagery that I'd argue had value as a contribution to the art landscape that was entirely unique to AI, since the weird surreal quality of the images was the result of Machine Learning programs interpreting words and images in a fundamentally different way than humans do.

    image

    Like, I'd argue shit like this indisputably has a place as its own artistic style/medium. It's surreal and weird in ways which are completely distinct from what a human artist could produce, because its unique strengths come from details that are inscrutable, ambiguous, and hard to parse for the human mind- details a human artist would have an extremely hard time mentally visualizing, let alone translating into an art piece.

    But since the main selling point of AI art for both the people making these generators and the tech aficionados who are a little too into them is that AI art can serve as a cheaper/faster replacement and/or alternative for the work of human artists, progress is measured not in terms of how well they can use and explore the distinctly non-human quality of AI art, but instead in terms of how well they can suppress it to make it more closely mimic the work of human artists. So all advancement in the tech is geared towards progressively getting rid of the things I find artistically interesting about the medium, instead of towards leaning into them as strengths that give it a unique, artistically worthwhile style.

    Like, I don't think AI art is inherently "soulless" or devoid of artistic merit, but I do think the focus on trying to make it increasingly indistinguishable from art produced by humans strips away the things that gave it artistic merit to me. This thing can produce imagery that is weird and wild and hard for us to even conceive but the profit motive's tendency towards rewarding homogenization has neutered that to turn it into a factory of increasingly bland, generic, serviceable imagery.

  • One of the ways I’ve been thinking of AI art is through the arborescent (representational) vs rhizomatic (abstract) framework. The explanation I’m familiar with comes from Deleuze and Guattari: basically, when you take an art form you can see it as trying to represent something else (e.g. a still life painting), or you can explore what it does when you discard its need to represent something else* and just explore what paint can do that no other medium can -- the basis for many breakthroughs in abstract art.

    And I think this is really the issue that you’re pointing out: AI art is specifically being driven away from its abstract capabilities in order to get better at representing things.

    Now, there’s nothing bad about representational art in and of itself -- I’m not one of those snobs who uses it to determine what’s “real art” or not. But I think that the overwhelming push towards training it to be representational rather than abstract is tied to the goals of using it to mimic other art forms (e.g. photos and paintings) in order to undermine people who use those art forms professionally, because representational art is more interchangeable than abstract art. Not completely interchangeable of course -- there’s multiple reasons we use photos rather than painted portraits even if they’re capturing the same representation -- but the point of rhizomatic art is that it’s based on exploring the capabilities of its medium. What those capabilities are is specific to the medium, and you can’t just copy & paste them across different media. This isn’t profitable to people who want to mimic a photograph.

    Which is basically what you were getting at -- AI produces abstract art due to a function of its medium, specifically the fact that machine learning produces patterns that would never occur to a human.

    Something I think is interesting about AI compared to other art forms is that its abstract and representational modes are in tension. Any painter of a still life could learn to paint abstractly. Ditto photography. But the more you train an AI to “correctly” represent things in its art, the less useful it is for generating abstract art, because you’re directly pushing the medium and changing it in one direction or another.

    I don’t have a snappy conclusion to this post, just wanted to put these frameworks into people’s brains. Keep AI Art Weird 2023!

    *the arborescent/rhizomatic terminology is because representational approaches work like a tree, with the trunk as the original thing and the branches being different representations of it. This is different to the rhizomatic (like potatoes) approach, where the plant can grow in any way and anything can connect to anything else.

  • almost time

  • it’s time

  • fished through my tumblr over dinner tonight to find this post bc i quote it all the time and i wanted to show my pal who’s a twin. his face fell. “that’s us”

    his eyes were bloodshot and his mouth agape. i think he’s just in awe at how funny it is and i go “lol who’s sniff and who’s whimper” and he goes. “no. THAT’S US.”

    called his brother to get here asap with the hard drives of the day they were born, spent the next hour doing a deep dive to find the source of this image and analyzing the video. the only differences are the sheet and crib they’re in but we think they may have been moved to a secondary location between the video and this image because their features are identical and the hats are the EXACT same down to how they’re resting on their heads, and they were not provided by the hospital.

    i quote sniff and whimper every day. i show everybody i know this gif i think it is that funny. my friend and i were laying on the ground like two hours before dinner going “i’m sniff..” “i’m whimper!” in little voices.

    i fucking know sniff and whimper. i’ve known sniff and whimper all along.

    image
    image
    image
    image
  • image
  • image
  • all the frothing-at-the-mouth posts about how "don't you dare put a fic writer's work into chatGPT or an artist's work into stable diffusion" are. frustrating

    that isn't how big models are made. it takes an absurd amount of compute power and coordination between many GPUs to re-train a model with billions of parameters. they are not dynamically crunching up anything you put into a web interface.

    chances are, if you have something published on a fanfic site, or your art is on deviantart or any publicly available repository, it's already in the enormous datasets that they are using to train. and if it isn't in them now, it will be in the future: the increases in performance from GPT 2 to 3 to 4 were not gained through novel machine-learning architectures but by ramping up the amount of data used to train by orders of magnitude. if it can be scraped, just assume it will be. you can prevent your stuff from being used with Glaze, if you're an artist, but for the written word there's nothing you can do.

    not to be cynical but the genie is already far more out of the bottle than most anti-AI people realize, i think. there is nothing you can do to stop these models from being made and getting more powerful. only the organizing power of labor has a shot at mitigating some of the effects we're all worried about

  • image

    this post had over 10k notes and lots of people in replies getting very angry and panicky and threatening imaginary bad actors and begging people not to put their fics into chatgpt. the reply is authoritatively saying "anything that is given to AI it can use it later to draw from." no source! like - i don't know if they save your prompts. they probably do for some other nefarious purposes. but:

    image

    these are the sizes of the training sets used to train gpt-3. as a rule of thumb in natural language processing, one english word is on average about 1.3 tokens. the common crawl dataset alone is around 300 billion words; for gpt-3 they don't even manage to use all of it. this is the scale of the data they need. they are not re-training their model with the little prompts you put in, and even if they did, it's like... a drop of water in the ocean. it's not gonna have an effect on how the model behaves.
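
    the arithmetic here is easy to sanity-check yourself. a quick back-of-envelope sketch in python (the 100k-word fic is a made-up example; the 300 billion training tokens figure is from the gpt-3 paper, and 1.3 tokens per word is a rough rule of thumb, not an exact value):

    ```python
    # rough scale comparison: one pasted fic vs. gpt-3's training run.
    # the fic size is an invented illustrative number.
    training_tokens = 300e9      # tokens seen during gpt-3 training
    fic_words = 100_000          # a very long fic someone pastes in
    tokens_per_word = 1.3        # rough rule of thumb for english text

    fic_tokens = fic_words * tokens_per_word
    fraction = fic_tokens / training_tokens

    print(f"one 100k-word fic is {fraction:.1e} of the training run")
    # well under one millionth of the data seen during training
    ```

    even a novel-length prompt is less than a millionth of what one training run chews through.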

    i think people are, on a gut level, still understanding these models as "collage machines." they're not. they are not borg-assimilating all your best ideas from your fics to frankenstein them back together. they are statistical models. they are compressing gargantuan amounts of data down into smaller (still huge, but much smaller) models of that data by looking at trends and likelihoods and repetitions. i'm not saying you're a great person if you use gpt to autocomplete old fics but even if they were for some reason adding your prompts to their datasets, it's not gonna have an effect.
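
    to make "statistical model" concrete, here's a toy bigram language model in python- a deliberately tiny illustration, nothing like a real llm in scale or architecture, but the principle (learned statistics, not stored snippets) is the same. it counts which word follows which in a corpus, then samples new text from those likelihoods:

    ```python
    # toy bigram model: stores counts of word-follows-word, not text.
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # count how often each word follows each other word
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(start, length, rng=random.Random(0)):
        word, out = start, [start]
        for _ in range(length):
            options = follows[word]
            if not options:
                break
            # pick the next word in proportion to how often it was seen
            words, counts = zip(*options.items())
            word = rng.choices(words, weights=counts)[0]
            out.append(word)
        return " ".join(out)

    print(generate("the", 6))
    ```

    the model never keeps the corpus around- only the trend that, say, "cat" follows "the" half the time. that's the sense in which these are compressed statistics rather than a collage of the originals.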

    the culture on here about anti-ai stuff has approached, like, mythology - making up shit about what they can do, talking about how scary they are, ghost stories, moral panic. this wild overstatement about what they can do only benefits the companies selling them, and those trying to use them as pretense to undermine labor.

  • Honestly, I think part of the problem is that we’ve allowed the companies shilling these models to call them “AI” with relatively little pushback. Remember when one- and two-wheeled personal conveyors - like a Segway without the handles - were rebranded as “hoverboards” in 2015 as a Back to the Future reference? It’s the same thing. And the problem here is that just like hoverboards don’t hover, “AI” isn’t intelligent. They’re just statistical learning models with sophisticated outputs.

    But allowing the companies to own the branding on them, and allowing that branding to be “AI”, invokes all the science fiction we’ve ever read. If you’ve been on TV Tropes for ten seconds you’ve seen the “AI Is a Crapshoot” page, and that’s kind of how society is treating these tools, when, honestly, they’re just web scrapers - fundamentally the same web scrapers people have been using for decades - and statistical models.

  • yeah, this is a good point. "AI" literally doesn't mean anything.

    It has referred to a range of different technologies since the 50s, some of them including no machine learning at all. I forget who coined it, but there's a lovely quote about how AI is just "whatever problem computers can't totally solve yet:" as soon as it's considered acceptably solved, the moniker moves on to the next big thing. (example: voice recognition systems, like the ones you talk to when calling tech support. didn't use to be a thing! used to be fancy and unreliable! now totally invisible, taken for granted)

  • [ID: First image is an anonymous ask reading "oh I wasn't aware it was feeding the AI. I've inserted hundreds of fics into ChatGPT for their continuation or for a different plot within the same context just for fun and out of curiosity... but I've never posted any of them...." the response is "Indeed, anything that is given to AI it can use later to draw from. That's why it doesn't matter if you post them or not as it now has access to those writers' texts without their permission." Second image shows a table charting five "Datasets" against their "Quantity (tokens)," "Weight in training mix," and "Epochs elapsed when training for 300B tokens." The table is labelled "Table 2.2: Datasets used to train GPT-3." The largest dataset is "Common Crawl (filtered)," with 410 billion tokens, a weight of 60%, and .44 epochs elapsed. The other datasets are WebText2 (19 billion tokens), Books1 (12 billion), Books2 (55 billion), and Wikipedia (3 billion). The table's caption reads: "Weight in training mix refers to the fraction of examples during training that are drawn from a given dataset, which we intentionally do not make proportional to the size of the dataset. As a result, when we train for 300 billion tokens, some datasets are seen up to 3.4 times during training while other datasets are seen less than once." end ID]

  • image

    i think i’m funny

  • happy pride

  • a drawing of DC character Wink, falling through the clouds. she and the area around her are in pinks and purples. the text above her reads "Catch Me".

    Catch Me/Always

    OOPS ITS BEEN FOREVER HASN'T IT! here's my Wink piece for @dc-lgbt-zine - the Always half of the two page spread (featuring The Aerie) was done by @vigiilantism :)

  • image

    twin piece with @drgnbrst for @dc-lgbt-zine !

  • 1 2 3 4 5
    &. lilac theme by seyche