also at beehaw

  • 2 Posts
  • 69 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • degree in Visual Art, work in digital asset management for a marketing (blech) studio. I’d love to get into a DAM position somewhere less ethically awful, like a symphony or museum or something, buuut my position pays really well relative to other similar jobs I’ve looked at, so that’ll have to wait until I feel more established in life.

    took a couple of basic comp-sci classes in college, though, and went to a coding bootcamp before I got my current position. running linux on my laptop, might switch to it on my desktop too. I use bash a lot at my job for bulk-renaming files.

    there’s a lot about tech-heavy areas that interests me, but it’d drive me crazy to be around too much of it. I think there’s a lot of good in the liberal arts that gets missed by the sort of hard rationalists who tend to hang out in tech spaces.

  • My high school and college journals are filled with so much angst about crushes and “do they like me? don’t they like me?” that it’s physically difficult to re-read them now, hah.

    I had a crush on a redhead from about age 10 until I left for college (it was a small town), then crushed on the various guys in my dorm and friend group (and one hot artist girl in a philosophy class) until I decided I needed to practice dating in junior year and actually went on a few dates thanks to Tinder. Though I didn’t escape entirely, as I had a couple of crushes on regular customers when I worked in an art supply store after graduating.

    Now I’m happily partnered and do not miss the mental anxiety of crushes, though there’s a twinge of excitement in the idea of having a crush that I’ll always be nostalgic for.

  • So I’m no expert at running local LLMs, but I did download one (the 7B Vicuna model recommended by the LocalLLM subreddit wiki) and try my hand at training a LoRA on some structured data I have.

    Based on my experience, the VRAM available to you is going to be way more of a bottleneck than PCIe speeds.

    I could barely hold a 7B model in 10 GB of VRAM on my 3080, so 8 GB might be impossible or at least very tight. IMO, to get good results with local models you really need large quantities of VRAM and a 13B-or-larger model.
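    If you do want to experiment at 8–10 GB, the usual workaround is 4-bit quantization, which cuts the weight footprint from roughly 14 GB (7B params at fp16) to around 4 GB. Here’s a rough sketch using the Hugging Face transformers + bitsandbytes stack; the checkpoint name is just one public 7B Vicuna build, and this isn’t necessarily the exact setup I ran.

```python
# Back-of-envelope: 7B params * 2 bytes (fp16) ~= 14 GB of weights alone,
# which is why a 7B model barely fits in 10 GB unquantized. Loading in
# 4-bit brings the weights down to roughly 4 GB.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "lmsys/vicuna-7b-v1.5"  # example public 7B checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,                     # 4-bit weights
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,  # matmuls still run in fp16
    ),
    device_map="auto",  # spills layers to CPU RAM if VRAM runs out
)

prompt = "Briefly, what is digital asset management?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```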

    Additionally, when you’re training a LoRA, the model, the optimizer state, and your training batches all have to sit in VRAM at once. My training dataset wasn’t very large, and even so, I kept running into VRAM constraints during training.
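    For what it’s worth, this is roughly what a QLoRA-style setup looks like with the Hugging Face peft library: the quantized base model stays frozen and only a small adapter gets trained, which is the main lever for keeping training inside a consumer card’s VRAM. The rank, alpha, and target module names below are illustrative defaults, not settings I’m vouching for.

```python
# Rough sketch of attaching a LoRA adapter to a 4-bit base model with peft.
# Illustrative settings only; not necessarily the configuration I used.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = AutoModelForCausalLM.from_pretrained(
    "lmsys/vicuna-7b-v1.5",  # example public 7B checkpoint
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
    ),
    device_map="auto",
)

# Freezes the quantized base weights and enables gradient checkpointing,
# trading extra compute for a big cut in activation memory.
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,                                  # low-rank dimension: small r = tiny adapter
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA-family models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the 7B weights
```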

    In the end I concluded that, in its current state, running a local LLM is an interesting exercise, but it’s only really great on enthusiast-level hardware with loads of VRAM (4090s, etc.).