On Being Against AI

We have had to repeatedly state our opinions on AI, but have not yet sat down to write a whole thinkpiece on it. Until we do, have this message we sent as part of a conversation in mid-2025.

The primary and most important reason why I refuse to engage with AI is that I place a lot of self-worth and merit on my ability to do things for myself. AI, to me, feels like having someone else solve a puzzle instead of learning how to do it yourself. “But what about StackOverflow?” StackOverflow, and documentation in general, can’t solve the puzzle for you. It can help you understand pieces of the puzzle and give you examples of similar puzzles being solved, but it ultimately doesn’t give you the solution to your specific puzzle.

If I can’t do something myself, then I want to engage with people who can do it for themselves, to learn how they do it and understand them better, even if I don’t intend on diving head-first into doing it myself. Similarly, I’d much rather commission a cool artist to draw something than have an AI draw it. I can talk with them and learn about their process, I can contribute to an artist being able to earn a living from their work, and I participate in the omnipresent exchange of goods and services that I am forced to live under with a human instead of a faceless literal machine, itself the product of a faceless metaphorical machine of a corporation.

Further, from my own experience watching people work with AI and from some of the studies coming out recently, it’s very clear that relying on AI ends up teaching you to think less, which scares the heck out of me. I’ve watched a lot of videos by TheraminTrees, and one of the phrases of his that I use most persistently is “People who don’t want you to think are never your friend.” This is true of religious cults, of fanatic groups, of political parties, of anything and anyone, and it is true of AI. AI wants me to offload more of my thinking onto it instead of thinking for myself, and that state of the world is fundamentally against my core values.

I participate in open source because I want people to learn from my work, to understand programming better, and to contribute their own ideas to the space. At my core, I do my work because I want to improve myself and others, in my thinking and theirs, and to increase people’s understanding of the world and my craft. This is central to why academia is my goal and why I want to be a professor.

AI is diametrically opposed to this.

Policies

  • We do not use AI, period, whether for generating text, code, or images.1
  • While we will not berate you, as that would be rude, we do think that using AI means you are less skilled, and we look down upon work you complete with the help of AI.
  • If you created something entirely with AI, we do not want to use it, and will take measures to actively avoid it.
  • If you revert AI-made commits and don’t include further AI changes, then that’s awesome. However, where possible, avoid using AI in the first place.
  • While we do not think it is impossible for an LLM to be a useful piece of technology, we do think the vast, vast majority of uses for them are bad, and if you include one in your project, we will likely try to avoid it.

  1. We have used an AI (ChatGPT specifically) exactly once, against our will, because an assignment directly instructed us to do so. We hated it, and when we accidentally lost its output, we made up our own output with fleshy intelligence instead.