Default Software Development Choices for 2023
There are few absolutes in software. Also: a hard-and-fast choice that's slightly wrong is typically better than being overwhelmed by options. In this post, I'll explore some default choices that I consider the best for most projects. I would deviate from any of these if there were a reason specific to the project or to the group of people I was making it with or for.
Each of these could, and probably should, get a kick in the teeth for the flippancy with which I'm stating them so absolutely. And yet I believe each of them could survive that metaphorical kick in the teeth and still stand as at least a good-enough option.
Key assumption: We're talking about either a new project at an existing organization, or a collaborative effort that you're hoping to productize and build an organization around. For personal projects, I don't have any advice other than to have fun with it.
- Monolith in a monorepo. It's easier to break up the monolith and the monorepo later than it is to assemble them from disparate parts. Also, you'll be able to move faster in the early days, when speed is most valuable.
- Trunk-based development with continuous integration. You'll move quicker and avoid bottlenecks.
- Make every change using a PR or MR so CI runs every time and you get in the habit of ELI5ing changes to each other.
- Per-PR staging builds: Deploy every change somewhere to make it easy to collaborate. Don't forget to tear it down to save $$.
- Continuous deployment: Ship trunk to prod on every commit. This is really expensive to retrofit later, and ridiculously valuable from a speed, change-management, and developer-happiness perspective. Haven't read Nicole Forsgren et al.'s Accelerate yet? It's a great book.
- Everything is software, including infrastructure and data pipelines. Infra: Terraform is good. CloudFormation and Cloud Deployment Manager are fine, because you probably won't migrate between clouds. Data: I was introduced to dbt this year by our data engineering team. It's a fantastic tool, mostly because it precipitates the big change in data engineering from value being delivered one F5 at a time (one manual query refresh at a time) to value being delivered as testable, runnable software (see the first sketch after this list).
- 12-factor (https://12factor.net) solves so many small problems, and a few big ones, too (config sketch after this list).
- As soon as you can, start measuring the following as objectively as possible, in this order of importance: observability, security, complexity, change management (including but not limited to test coverage), developer experience, automation, resiliency, accessibility, performance. Yes, observability is first: you can't measure or improve what you can't see (see the logging sketch after this list). Yes, performance is last: performance usually only matters when it's off by an order of magnitude. Otherwise it's mostly an implementation detail of the above, especially complexity and resiliency. And... optimizing for performance often (usually?) means increased complexity: caching layers, hard-to-grok code, etc.
- Auth0. Don't build your own authentication (see the token-verification sketch after this list).
- Prefer build-time integrations to run-time integrations; the last sketch after this list contrasts the two. Only fall back on run-time integrations when you have to: a sidecar for running tasks so your web server doesn't tip over when that big on-demand report is run, for example. Microservices, iframe integrations, etc. are powerful patterns that are also very expensive from multiple perspectives. Use them only when you're forced to by either workload or human scale. And even then, explore creative solutions first, such as shifting intensive workloads to off-hours.
- For UI: Make a best guess by sourcing design principles and a palette from a best-in-class experience, and choose a UI library appropriate to them. Don't make any big changes to the design system until the project is ridiculously successful.
- Process: Pick some combination of Scrum, Kanban, and agile practices that works for you. Most important is that your process includes open divergence, giving everyone an opportunity to weigh in on what might be done, followed by convergence, with a clear DACI (Driver, Approver, Contributors, Informed) describing who owns and is accountable for converging on a decision.
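A few sketches to make the above concrete. First, the dbt point: this is not dbt itself, but a toy Python stand-in for the shift it represents. Once a data transformation is plain, runnable code instead of a query you re-run by hand, it can be tested like any other software. The function and test below are hypothetical.

```python
# Hypothetical stand-in for a data-pipeline model: a pure, testable transformation.
from datetime import date


def daily_revenue(orders: list[dict]) -> dict[date, float]:
    """Roll raw order rows up into revenue per day."""
    totals: dict[date, float] = {}
    for order in orders:
        totals[order["day"]] = totals.get(order["day"], 0.0) + order["amount"]
    return totals


def test_daily_revenue_sums_per_day():
    # The kind of assertion a dbt schema test expresses declaratively.
    orders = [
        {"day": date(2023, 1, 1), "amount": 10.0},
        {"day": date(2023, 1, 1), "amount": 5.0},
        {"day": date(2023, 1, 2), "amount": 7.5},
    ]
    assert daily_revenue(orders) == {date(2023, 1, 1): 15.0, date(2023, 1, 2): 7.5}
```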
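A minimal sketch of 12-factor config (factor III: store config in the environment). The variable names are made up for illustration; the point is that the same build runs unchanged in every environment.

```python
import os

# Factor III: config lives in the environment, not in the codebase.
# DATABASE_URL and FEATURE_NEW_REPORT are hypothetical names for this sketch.
DATABASE_URL = os.environ["DATABASE_URL"]  # fail fast if missing
FEATURE_NEW_REPORT = os.environ.get("FEATURE_NEW_REPORT", "false") == "true"

print(f"connecting to {DATABASE_URL}, new report enabled: {FEATURE_NEW_REPORT}")
```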
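One low-cost way to start on the observability bullet: emit structured, machine-parseable logs from day one so there's something to measure later. A sketch using only the Python standard library; the field names are made up.

```python
import json
import logging
import time

logger = logging.getLogger("app")
logging.basicConfig(level=logging.INFO, format="%(message)s")


def handle_request(route: str) -> None:
    start = time.monotonic()
    # ... do the actual work here ...
    logger.info(json.dumps({
        "event": "request_handled",  # stable, queryable event name
        "route": route,
        "duration_ms": round((time.monotonic() - start) * 1000, 2),
    }))


handle_request("/reports/daily")
```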
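For the Auth0 bullet, the usual integration point is verifying the JWTs Auth0 issues for your API. A sketch using the PyJWT library's JWKS client (`pip install pyjwt[crypto]`); the domain and audience values are placeholders, so check PyJWT's and Auth0's current docs before copying this.

```python
import jwt  # PyJWT

AUTH0_DOMAIN = "your-tenant.us.auth0.com"  # placeholder
API_AUDIENCE = "https://api.example.com"   # placeholder

jwks_client = jwt.PyJWKClient(f"https://{AUTH0_DOMAIN}/.well-known/jwks.json")


def verify(token: str) -> dict:
    # Fetch the signing key Auth0 used for this token, then validate claims.
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=API_AUDIENCE,
        issuer=f"https://{AUTH0_DOMAIN}/",
    )
```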
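Finally, the build-time vs. run-time distinction in code form, with hypothetical names throughout. The build-time version is a plain function call whose breakage shows up in CI; the run-time version puts the same capability behind a network call and inherits its failure modes.

```python
import urllib.request


# Build-time integration (preferred): the capability ships inside the same
# deployable, so breakage surfaces when CI builds and tests the repo.
def build_big_report(customer_id: int) -> str:
    # Hypothetical heavy report; in real life this lives in the monorepo.
    return f"report for customer {customer_id}"


# Run-time integration (the fallback): the same capability behind a service
# call. New failure modes appear: timeouts, retries, version skew.
def fetch_big_report(customer_id: int) -> bytes:
    # reports.internal is a hypothetical internal service host.
    url = f"http://reports.internal/customers/{customer_id}/report"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read()
```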