this problem actually broke my brain, but it works! compositing pre-rendered backgrounds and 3D objects where the pre-rendered scene does not include z-data, but *does* include world-y. it's possible to derive depth from the view and projection matrices (and their inverses) plus world-y, write it explicitly in the shader, and have mixed 2D/3D scenes work out O_O
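A minimal sketch of how that derivation can work (my reconstruction, not the actual shader): given a pixel's NDC x/y and the world-space Y stored in the pre-rendered image, unproject the pixel's view ray with the inverse view-projection matrix, intersect it with the horizontal plane y = world_y, and re-project the hit point to get the depth value you would write explicitly in the fragment shader. Pure-Python 4x4 math keeps it self-contained:

```python
# Sketch under assumptions: OpenGL-style clip space, y-up world,
# and a ray that isn't parallel to the y = world_y plane.
import math

def perspective(fov_y, aspect, near, far):
    # Standard OpenGL-style projection matrix (view space -> clip space).
    f = 1.0 / math.tan(fov_y / 2.0)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0]]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def invert(m):
    # Gauss-Jordan inverse of a 4x4 matrix.
    n = 4
    a = [list(m[i]) + [1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        d = a[col][col]
        a[col] = [x / d for x in a[col]]
        for r in range(n):
            if r != col:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

def depth_from_world_y(viewproj, inv_viewproj, ndc_x, ndc_y, world_y):
    # Unproject the near- and far-plane ends of the pixel's view ray.
    def unproject(ndc_z):
        p = mat_vec(inv_viewproj, [ndc_x, ndc_y, ndc_z, 1.0])
        return [c / p[3] for c in p[:3]]
    a, b = unproject(-1.0), unproject(1.0)
    # Intersect the ray with the horizontal plane y = world_y.
    t = (world_y - a[1]) / (b[1] - a[1])
    hit = [a[i] + t * (b[i] - a[i]) for i in range(3)]
    # Re-project the hit; clip.z / clip.w is the NDC depth to write
    # (remap [-1,1] -> [0,1] for gl_FragDepth in OpenGL).
    clip = mat_vec(viewproj, hit + [1.0])
    return clip[2] / clip[3]
```

In a real scene `viewproj` would be projection × view; with an identity view matrix the projection alone is enough for a round-trip sanity check.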

It's basically the Disco Elysium workflow, but done in Krita. The result is completely repainted and uses the rendered lighting. I use Blender to compute the lighting and object IDs, and the original concept was generated by Stable Diffusion.

what's that?
oh, just normal things, you know. it's how you get the selection model in a Krita plugin
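For context, if that "selection model" means the Qt item-selection model of the layers docker, a hedged sketch of how a plugin might dig it out (the `'KisLayerBox'` objectName is an assumption; this only runs inside Krita's Python environment):

```python
# Assumption-heavy sketch — runs only inside Krita's scripter/plugin env.
from krita import Krita
from PyQt5.QtWidgets import QDockWidget, QTreeView

qwin = Krita.instance().activeWindow().qwindow()
# Assumption: the layers docker's Qt objectName is 'KisLayerBox'.
docker = qwin.findChild(QDockWidget, 'KisLayerBox')
view = docker.findChild(QTreeView)
selection_model = view.selectionModel()  # QItemSelectionModel for layer rows
```

The "normal things" joke lands because there is no public API for this: you have to rummage through the Qt widget tree by name.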

extreme WIP. I have no idea how I feel about this workflow so far.
1) Stable Diffusion txt2img to conceptualize a scene
2) approximately model the scene and use the concept as textures
3) slowly replace the textures using DreamTexture

so, I'm almost done porting Re-Hearsed from SugarCube to Harlowe, and it's starting to gel surprisingly well

TIL you can write custom Discord statuses. A hugely under-explored framing device

You can't put this in a bookclub review and expect me not to suggest it for bookclub!

tiny arachnid on a book 

i met a new friend at the park today

ever just stare at someone through time and space and wonder if there's someone doing the same to you?

I'm not sure if this is AI-generated or not. I asked ChatGPT to write an equation like @bitartbot and it produced one that, when rendered, looks like this

Food (baking pic) 

It's the first boule I've done in a bit and it must be good bc the kiddos asked to eat plain bread :D

I'm playing Chibi-Robo! for gameclub and Helen is dropping the worst advice on Jenny. Stay strong Jenny! 🐸 💖

Frog Camp

Just a place for frogs on the internet