Talk nerdy to me :D

  • PixeIOrange@lemmy.world
    23 hours ago

    Radio transmitting. It's quite a large rabbit hole. Right now I upload data from ships I receive with a small setup and earn a test cryptocurrency for it.
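
    If you're curious what those ship messages look like, here's a minimal sketch decoding a single raw AIS sentence with the pyais Python library. The library choice is mine and the sample sentence is a placeholder from pyais' own examples, not from my setup:

    ```python
    # pip install pyais
    from pyais import decode

    # Placeholder AIS sentence (from the pyais examples); swap in a frame
    # received by your own setup.
    raw = b"!AIVDM,1,1,,B,15M67FC000G?ufbE`FepT@3n00Sa,0*5C"

    msg = decode(raw)      # parses the NMEA sentence and checks the checksum
    print(msg.asdict())    # MMSI, position, speed, course, ...
    ```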

    But you can do loads and loads of things; rtl-sdr.com is a nice, cheap start.

    Oh, and locally running AI models (via GPT4All). Insane how far along we are.

    • bmpvy@feddit.org
      21 hours ago

      Locally running AI models is interesting to me - do you have any recommended links or tutorials on where to start?

      (I looked into radio transmitting a couple of months ago, but it was way too overwhelming and I settled on a different hobby lol)

      • PixeIOrange@lemmy.world
        15 hours ago

        Like I said, look into GPT4All, at least for text generation. It's open source and really simple: download, install, choose a model (for mediocre laptops, 4-8B models are a good size) and chat.
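
        If you'd rather script it than use the GUI, GPT4All also ships Python bindings. A minimal sketch; the model file name is just an example, and the bindings download it on first run:

        ```python
        # pip install gpt4all
        from gpt4all import GPT4All

        # Example model name; any 4-8B GGUF from the GPT4All catalog
        # should work on a mediocre laptop.
        model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

        with model.chat_session():
            print(model.generate("Explain SDR in two sentences.", max_tokens=200))
        ```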

        For radios, there are some cheap beginner devices to start with. Look for the RTL-SDR Blog V4 USB dongle for about 40€. You can receive roughly 1-1700 MHz with it, which covers loads of interesting frequencies. You can buy a 30€ Quansheng UV-K5 or UV-K6 (which is somehow the same model) with custom firmware for a great handheld VHF/UHF radio. Or a 30€ SI4732-based ATS-Mini as a "world receiver", also with custom firmware like hjberndt's.
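
        Once you have the dongle, you can also poke at it from code. A rough capture sketch with the pyrtlsdr bindings; the frequency and sample count are arbitrary picks for illustration:

        ```python
        # pip install pyrtlsdr  (needs the librtlsdr driver installed)
        from rtlsdr import RtlSdr

        sdr = RtlSdr()
        sdr.sample_rate = 2.048e6   # Hz
        sdr.center_freq = 162.0e6   # Hz; marine AIS sits at 161.975/162.025 MHz
        sdr.gain = "auto"

        samples = sdr.read_samples(256 * 1024)   # complex IQ samples
        sdr.close()
        print(f"captured {len(samples)} IQ samples")
        ```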

        You could invest some more and get a Flipper Zero for around 250€, which is a neat Tamagotchi-like IT Swiss Army knife.

        Or a PortaPack H4M (the new cliffort heath version, the best HackRF One clone) for about 200€. It's the "big brother" of the Flipper Zero, with fascinating capabilities like scanning surrounding ships and planes.

        Don't forget proper antennas, which is where costs can escalate quickly.

      • AdrianTheFrog@lemmy.world
        20 hours ago

        ollama is the usual one; they have install instructions on their GitHub, I think, plus a model repository, etc.

        You can run something on your CPU if you don't care about speed, or on your GPU, although you can't run any of the more intelligent models without a decent amount of VRAM.

        For models to use, I recommend checking out the Qwen-distilled versions of DeepSeek-R1.
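
        Once it's installed and you've pulled a model (e.g. `ollama pull deepseek-r1:7b`, one of the Qwen distills), you can hit the local REST API from any language. A quick sketch; the model tag is just an example, pick whatever fits your VRAM:

        ```python
        # Talks to a locally running Ollama server (default port 11434).
        import json
        import urllib.request

        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({
                "model": "deepseek-r1:7b",   # example tag; any pulled model works
                "prompt": "Why is the sky blue?",
                "stream": False,
            }).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print(json.loads(resp.read())["response"])
        ```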