

A decentralized web is good: All projects have moved

2018-06-11

All of my software projects have been moved to this server right here. All further development will happen here, and new releases will be published here.

Here’s the list of projects:

Please consider all other sources obsolete. This is upstream now.

Each project now has its own project page, which can be reached by visiting /git/$project/, for example:

At the top of those pages, there is a git clone ... line, which tells you how to clone the repository in question to your machine.
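
To give a concrete (made-up) example: for a hypothetical project named foo, cloning would look something like this – the real URL is the one shown on the respective project page:

# “foo” is a made-up project name; use the URL from the project page.
git clone https://www.uninformativ.de/git/foo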

Also note that there is an Atom feed for each project. If your browser does not indicate feeds by default, you can view them manually by fetching /git/$project/atom.xml, for example:
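
Sticking with the made-up project foo from above, that would be:

# “foo” is a made-up project name.
curl -s https://www.uninformativ.de/git/foo/atom.xml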

Each project’s README contains this link:

https://www.uninformativ.de/bugs.html

It very briefly tells you how to report bugs or send patches. Don’t worry, it’s not complicated: Just send me an e-mail.

Why did this happen?

Because I can. The real question is: Why didn’t this happen earlier? I have been wanting to move all that stuff to my own server for years; I’ve just been too lazy to do it.

I strongly believe that a decentralized web is a good thing. If you have the means to run your own server, do it. You then get maximum control over which software you run and over what happens to your data. Of course, there are limits to everything – and data that you publish is public. But running your own server means being independent (to the greatest degree possible) and I have a hard time finding arguments against that.

Why would you want to depend on yet another big company? What’s the benefit?

As I said, I was lazy, and I guess that’s the point. Using a service provided by someone else usually means less work for you. Granted. Or if you run a really big project, you can benefit from the knowledge and experience of big hosters. Granted.

I am just one person, though, writing a little bit of software. It should be possible for me to do web hosting on my own. From a purely technical standpoint, I hardly benefit from using free hosting platforms.

When it comes to things like “attention” or “visibility”, we’re leaving the scope of this blog post. Many aspects are involved here. I believe, for example, that a great deal of that visibility stems from the “followers” mechanism, but that’s a social issue, not a technical one. You can “follow” people outside of large hosting platforms, too, for example via RSS/Atom feeds. It doesn’t really matter anyway: I don’t want to make money with my software, so I hardly ever look at statistics and I don’t really care.

Back in the very dark ages, before I used version control systems, I did the same thing: The stuff that I made (programs, images, tutorials, …) was hosted on my own “server”. It wasn’t a “virtual private server” like today, though, it was just some web space that I could access via FTP. That was good enough a long time ago. I uploaded tarballs. And there were a lot fewer projects than there are now. We’re talking about four or five little things that I put on my web space.

Then I got used to Subversion, Bazaar, Git. You just cannot host those repositories in a meaningful and comfortable way if you only have FTP access. It won’t work. Thus, I picked a good hosting service for Git repositories and uploaded my stuff there. This is how I ended up there – for purely pragmatic reasons.

Today, I rent an actual server with root access. No more FTP-only access, and no more reason to use a hosting platform.

In an ideal world, I wouldn’t even need to rent a server, but would be able to use a small device like a Raspberry Pi at home. That device has enough computing power for the little traffic that I get, and my internet connection at home is fast enough, too. Due to that fancy thing called “dynamic IP addresses”, this approach doesn’t work for all services, though. Sadly.

Static content everywhere

While we’re on this topic, I’m going to describe my setup a little bit.

This blog is just a bunch of static HTML files. I don’t write HTML manually, I write Markdown. This is true for all the pages, not just for the blog posts. I then have a short shell script that loops over all Markdown files and converts them to HTML using python-markdown. Another shell script iterates over all blog posts and creates a Markdown file for the post index, which is then converted to HTML as well.
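
Roughly speaking, the conversion boils down to a loop like this – a simplified sketch, not the actual script, and the file layout is made up:

# Simplified sketch; file names and layout are made up.
for f in *.md
do
    python -m markdown "$f" > "${f%.md}.html"
done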

That’s the blog, basically.

The image files and desktop screenshots are just some JPEG files in a directory hierarchy. make-html-index creates the thumbnails and the index pages – again, static HTML.

Finally, the Git repositories are made browsable by stagit, another wonderful tool by the suckless community. It doesn’t offer the same functionality as cgit or larger applications like GitLab, but it’s perfect for showing the current contents and the history of a small repo. This is what I need. If you want to work on my code, just clone the repo.
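
Generating those pages boils down to something like this – a rough sketch with made-up paths, not my exact setup:

# Rough sketch; all paths here are made up.
mkdir -p /var/www/git/foo
cd /var/www/git/foo
stagit /srv/git/foo.git

stagit-index /srv/git/*.git > /var/www/git/index.html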

The Git repos themselves can be cloned via HTTP anyway. I have a list of repos that I want to publish and then yet another shell script iterates over said list and essentially does this:

cd "$webspace"
mkdir "$repo" && cd "$repo"
git init --bare

# Push the public branches (and all tags) into the bare repo.
cd "$repo_source"
for i in $public_branches
do
    git push --tags "$webspace/$repo" "$i"
done

# Needed so that the repo can be cloned over “dumb” HTTP.
cd "$webspace/$repo"
git update-server-info
Then rsync $webspace to the server.
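
The upload itself is just an rsync call, along these lines – host and target path are placeholders:

# Host and target path are placeholders.
rsync -av --delete "$webspace/" user@example.org:/var/www/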

Now, since all of that content is static HTML or static Git files, there are several advantages:

One of the “disadvantages” is that I can’t host one of the common bug trackers. But then again, my projects get so little traffic that I really don’t need one. That saves me even more trouble.

Final words

I’m glad this task is done now. It was long overdue.

Maybe I’ll set up stagit-gopher, too, to make my repos available in Gopher space. I have been neglecting Gopher a little bit, lately …
