And why?

  • mlfh@lemmy.ml · +90 · 1 month ago

    Forgejo, a Gitea fork used by Codeberg. I chose it because it’s got the right balance of features to weight for my small use case, it has FOSS spirit, and it’s got a lovely package maintainer for FreeBSD that makes deployment and maintenance easy peasy (thanks Stefan <3).

    • zelifcam@lemmy.world · +20 · 1 month ago

      I’ve been meaning to switch over from Gitea to Forgejo for ever. I’ll get it done tomorrow ;)

      • Foster Hangdaan@lemmy.fosterhangdaan.com · +15 · 1 month ago · edited

        Definitely best to get that done ASAP. Forgejo is no longer guaranteed to be a drop-in replacement for Gitea since the hard fork:

        To continue living by that statement, a decision was made in early 2024 to become a hard fork. By doing so, Forgejo is no longer bound to Gitea, and can forge its own path going forward, allowing maintainers and contributors to reduce tech debt at a much higher pace, and implement changes - whether they’re new features or bug fixes - that would otherwise have a high risk of conflicting with changes made in Gitea.

    • thirdBreakfast@lemmy.world · +5 · 1 month ago

      +1 for Forgejo. I started on Gogs, then gathered that there had been some drama with that and Gitea. Forgejo is FOSS, simple to get going, and comfortable to use if you’re coming from GitHub. It’s actively maintained, and communication with the project is great.

    • PlexSheep@infosec.pub · +3 · 1 month ago

      I do the same. Forgejo works really well, and I’m also absolutely stoked for ForgeFed some day.

      It also has things like CI/CD. It’s a really, really good project, and self-hosting it is relatively painless. Even integrating it with my identity provider over OIDC was no problem.
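
      As a rough sketch of that OIDC setup, the Gitea-derived admin CLI that Forgejo ships can register an identity provider as an authentication source; every value below is a placeholder, and exact flag names may differ between versions.

        # Hypothetical example only: register an OIDC provider as an auth source.
        forgejo admin auth add-oauth \
          --name my-idp \
          --provider openidConnect \
          --key forgejo-client-id \
          --secret forgejo-client-secret \
          --auto-discover-url https://idp.example.home/.well-known/openid-configuration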

  • m4m4m4m4@lemmy.world · +55 / −1 · 1 month ago

    Codeberg. I host my web portfolio live there and even made a small contribution to kbin when it was alive. It’s great, though now I’d want to look at Forgejo.

  • ExtremeDullard@lemmy.sdf.org · +43 / −12 · 1 month ago · edited

    I use Github for 4 reasons:

    • Everybody else is on Github. Github is to repo hosting what Youtube is to video hosting. It’s sad but that’s how it is in this world of unchecked, extreme big tech monopolization. So I put my stuff up there because it’s just simpler to be found.
    • I use Github as a dumb git repo. I don’t use any of the extra social media garbage Microsoft tacked onto it. So I get free hosting and Microsoft pretty much gets no data on me - i.e. I’m a net loss to them.
    • You can use dumb repos as PPA and RPM sources if you need to distribute Debian or Red Hat packages; a sketch follows this list. Microsoft never intended for repos to be used this way, but if I can abuse Microsoft services, I will six ways to Sunday.
    • Github lets you drop videos in your README.md. But here’s a trick: you can use the links to the video files anywhere. In other words, you can use Github to host videos that you can post on other forums - including here on Lemmy, or on Reddit if you’re still patronizing that cesspit for some reason. I find this a nice way to abuse Microsoft’s resources also, and I’m all for abusing Microsoft’s resources.
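
    For the Debian case, here’s a minimal sketch of pointing APT at a package tree committed to a repo; the user/repo path, branch, and the [trusted=yes] shortcut are made-up placeholders, not a recommendation.

      # Hypothetical: a flat Debian repo (Packages index + .deb files) committed
      # under debian/ in github.com/youruser/yourrepo, served via raw.githubusercontent.com.
      echo "deb [trusted=yes] https://raw.githubusercontent.com/youruser/yourrepo/main/debian ./" \
        | sudo tee /etc/apt/sources.list.d/yourrepo.list
      sudo apt update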

    TL;DR: I use Github not only because it’s the most prevalent git hosting service out there, but because I can abuse it and make Microsoft pay for the abuse without getting anything of value from me in return.

    • theshatterstone54@feddit.uk · +7 / −1 · 1 month ago

      I’m actually continuously running GitHub Actions that I don’t need running, just because I can, and because it uses up their resources.

      That’s something I really like about Ublue: they use GitHub Actions, so if you build a custom image, you’re using GitHub’s processing power for it. So, go do that. Make hundreds. Bleed Microsoft dry.

      • sping@lemmy.sdf.org · +11 · 1 month ago

        wasting energy to somehow stick it to the man?

        Exhibit 56845 why humanity is fucking doomed.

        • theshatterstone54@feddit.uk · +1 / −2 · 1 month ago

          I actually forgot the /s. And I guess I wasn’t clear enough. This is less than a drop in the pool for them. An image build that takes them around 15 minutes, including setting up the VM, takes me around the same time on a machine with a 6-core Ryzen 5 at 2.375 GHz and 8 GB of RAM. Since they’re running it on their high-end hardware and it still takes that long, they aren’t allocating many resources to the VM, meaning it costs them basically nothing.

          TL;DR: If any of this were a cost of any significance to their bottom line, it would have been restricted and/or monetised.

          • sping@lemmy.sdf.org · +4 · 1 month ago · edited

            It’s obviously trivial energy waste in the big picture, but it’s 100% waste if you don’t need it. Like turning on lights in empty rooms.

  • DasFaultier@sh.itjust.works · +30 / −1 · 1 month ago

    Gitlab at work, because, well, it’s there and it works just fine.

    Forgejo at home, because it’s far less resource hungry.

    In the end, Git is (a) a command-line tool for (b) distributed work, so it really doesn’t matter much which central web service you put in place; you can always get your local copy via git clone REPO.
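
    For example (hostnames and paths are placeholders), one working copy can track both forges at once:

      # Clone from the work GitLab, then add the home Forgejo as a second remote.
      git clone https://gitlab.example.com/team/project.git
      cd project
      git remote add home https://forgejo.example.home/me/project.git
      git push home main   # push the same history to the second forge
      git remote -v        # shows both remotes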

  • ElectronBadger@lemmy.ml · +25 · 1 month ago · edited

    Codeberg for all my projects, both private and public. Some are mirrored to GitHub. I also use Codeberg Pages and its Woodpecker CI.

  • Mike Wooskey@lemmy.thewooskeys.com · +21 · 1 month ago

    I self-host Forgejo. I’m not a heavy or advanced user, and it suits my needs. I barely use GitHub any more: mainly to star repos I like, and to find and use repos (there’s a ton there; it’s almost ubiquitous).

          • JackbyDev@programming.dev · +1 · 1 month ago

            Same. Their policy is very reasonable in my opinion. They still allow non-FOSS stuff for things like personal config files, which is nice. The only time I ever got a warning was when I uploaded a 100 MB file to a private repo without any license. It was just a banner on the repo. (I was messing around with Alpine images.)

  • dinckel@lemmy.world · +9 · 1 month ago

    I use GitLab, but I’m becoming increasingly unhappy with it over time.

    When I have enough resources to run another local machine, I’m planning to switch to Codeberg, with self-hosted Woodpecker CI instead.

  • GreenKnight23@lemmy.world · +8 · 1 month ago

    Self-hosted GitLab.

    I love it. I can clone external repos on a schedule and build my projects based on my local cache. I’m even running some automation tasks like image deployments out of it too.

      • GreenKnight23@lemmy.world · +1 · 1 month ago

        Pipeline schedules. Once a month I clone the remote repo into a local branch and push it back to my repo with an automatic merge request assigned to me. Review and merge kicks off the build pipeline.
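
        Roughly, that scheduled job could look like the sketch below; the URLs, project ID, and CI variables are placeholders, and the merge request is opened through GitLab’s REST API.

          # Hypothetical monthly job run by a GitLab pipeline schedule.
          # UPSTREAM_URL, ORIGIN_URL and GITLAB_TOKEN are assumed CI/CD variables.
          BRANCH="upstream-sync-$(date +%Y-%m)"
          git clone "$UPSTREAM_URL" mirror && cd mirror
          git remote add mine "$ORIGIN_URL"
          git push mine "HEAD:$BRANCH"
          # Open a merge request for review (project ID 42 is a placeholder).
          curl --request POST --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
            --data "source_branch=$BRANCH" --data "target_branch=main" \
            --data "title=Monthly upstream sync" \
            "https://gitlab.example.home/api/v4/projects/42/merge_requests"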

        I also use pipeline schedules to do my own DDNS to Route 53 using Terraform; it runs once every 15 minutes.

        Also, once a week I’ve got about 50 container images I cache locally that I build my own images from.