---
title = "How to Ensure your Software Project Fails"
date = 2024-10-25T18:00:00-05:00
---

I can't tell you how to ensure your software project succeeds, but I can tell you several ways to make it fail.

---

This is more for myself as a postmortem for the project I've been doing at the dayjob for the past six months. Now that it's confirmed that I'm rolling off next week, I want a record of the sorts of decisions I need to push back harder against. For my own sake, since I like to avoid feeling doom and dissatisfaction for 40 hours a week.

The project is a relatively simple-sounding data transformation: take stuff in from three or four legacy systems, transform it, and send it to the new system. Nothing I haven't done before, but there are some very clever ways to make it fail:

1. Provide no schemas, only an inaccurate mockup of what the data looks like. Make your developers work off of actual data and a black-box validator on the receiving end.

This by itself killed the project. We started with four developers, and frankly that would have been enough to get it ready to test in about two months if we'd had the proper tools for the job. Hell, for nine data types, I could probably have gotten it ready in those two months while working alone. Some of them are a lot more complicated than the first project I did in my career (which took me a week and a half for one simple document), but I'm also a significantly more experienced developer now. Instead, the project ballooned to about 20 people tracking down tiny data type mismatches, and it's been six months. This is at least a $2 million mistake.

2. Provide a single architect as point of contact, with no direct communication with the teams on either side of the project or with QA. Have him drip-feed requirements in a big table on Confluence as he learns about them.

As I mentioned above, the service we were publishing to was effectively a black box.
The only way we actually discovered what it would accept was by sending things to it: no one on my team has ever seen its source code or talked to one of its developers. Everything went through the middleman. To make matters worse, QA (once we got them) was working off of a different checklist entirely. There were requirements we'd never even heard of until we got a report back from QA that our data wasn't passing some validation they were doing. The only guy we had to talk to didn't have all the information, so how on earth were we supposed to succeed?

3. Don't provide the tools the team needs to succeed.

We didn't have access to any of the client's systems for about three weeks. Now, we had been given a few example files to work off of, and I know how to use `git format-patch` and `git apply` to collaborate without a centralized repo, so this wasn't the end of the world, but most teams wouldn't be able to manage that situation effectively. What's more concerning is that it took another month after that for us to actually get a preprod environment to run our code in at all, and longer still to get a second one for testing. The third preprod environment in the client's deployment process still isn't up. I don't know how the hell they thought they'd have something in production three months ago.

4. Make the team work with unfamiliar technology that's unsuited for the job.

I couldn't tell you whether MongoDB is completely unfit for purpose, but I can definitely tell you that it's a bad choice when you're trying to be strict with data types and nobody on the team knows how to work with it. It took me two days to figure out experimentally how to write what in SQL would be a SELECT with a single JOIN and two WHERE clauses.

5. Organize the code poorly.

I'll accept responsibility for this one.
I was adamant at the beginning of the project that we have a package structure that organized our code by use case, rather than the fallback nothingness of "controller", "service", and "repository" that mixes unrelated code together. If you're familiar with [Martin Sandin's article](https://medium.com/@msandin/strategies-for-organizing-code-2c9d690b6f33), that'd be "by component" instead of "by kind". In the past, I've restructured some microservices this way and seen things get done twice as fast on them once everything isn't so jumbled up. However, I didn't have a strong opinion on whether we should organize by document type and then by process step or vice versa, so we went with document type as the top level. This was a mistake: our "shared" package is the largest one, and inside it is a mix of "by component" (logging, ingestion, publishing) and "by kind" (service, validator, util), which clearly indicates something is wrong. Being consistently bad is better than being inconsistently bad *and* a mess.

Since I'm thinking about it, I'll cheat a bit and throw in one from a project I was on two years ago:

6. Make it intimidating to ask questions.

I was thrown into a team doing management and reporting tools for a chatbot (traditional natural language processing and canned responses; this was before the LLM hype cycle), with basically no onboarding process. The other three devs had been on the project for years and knew everything they needed to get the job done, so I would have been the only person asking questions in the team chat. I did not do well. My stories constantly slipped past deadlines, and I felt the worst about my work that I ever have. This was the second bad performance review I've gotten in my career (the first was as an Amazon intern on a team that provided literally no support for me; my manager didn't even schedule the biweekly 1-on-1s he was supposed to), and I had to ask to be moved off the team.
Amusingly, I had the opportunity to talk with someone from that team who's a manager now. They replaced me with two other devs who, combined, are performing even worse than I was.

Please, if you're reading this and you're responsible for any software someone else might touch, go and write some documentation for it right now. Anything will help.
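Postscript: for anyone curious about the patch-file workflow from point 3, here's a minimal sketch. The repo names, file, and commit message are all made up for illustration; the point is just that `git format-patch` writes a commit out as a plain file you can send over email or chat, and `git apply` replays its diff in another clone, so no shared remote is ever needed.

```shell
set -e
work=$(mktemp -d) && cd "$work"

# Dev A's local repo, seeded from the example files we were given.
git init -q alice && cd alice
git -c user.name=alice -c user.email=a@example.com \
    commit -q --allow-empty -m "seed"
cd ..

# Dev B clones it by plain filesystem path; no central remote anywhere.
git clone -q alice bob

# Dev A commits a change and exports it as a patch file...
cd alice
echo '{"documentType": "claim"}' > schema.json
git add schema.json
git -c user.name=alice -c user.email=a@example.com \
    commit -q -m "Add schema guess"
git format-patch -1 -o ../patches >/dev/null
cd ..

# ...which Dev B applies to their working tree from the file alone.
cd bob
git apply ../patches/0001-Add-schema-guess.patch
cat schema.json
```

`git am` would work in place of `git apply` if you also want the commit itself, with its author and message, rather than just the diff in your working tree.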