10 Workspace Guidelines for a Superior Developer Experience

M.hosein abbasi
Feb 1, 2022


The following guidelines are fairly opinionated and are based on the experience of the authors of this book. We recommend that you use these guidelines as a starting point.

After you have had experience building services with our guidelines, we expect that you may consider modifying some of them to better fit your individual needs and experiences:

1. Make Docker the only dependency.

The “works for me” syndrome plagues many developer teams. It’s essential that anybody be able to easily create the same environment. As such, elaborate, manual setups should be banned. We live in the era of containerization, and teams should leverage it. To set up code, we should only expect to see the Docker runtime and Docker Compose on the host machine — nothing else! It should not matter whether the machine runs Windows, macOS, or Linux, or which libraries are present. Such assumptions are exactly what lead to broken setups. For instance, there should be no expectation that a specific version of Python, Go, Java, etc., is present on the developer’s machine. Setup instructions must be automated, not codified in README files.
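As a minimal sketch of what this looks like in practice, here is a hypothetical Compose file for a service with a `Dockerfile` at the repository root and a Postgres dependency (the service name, port, and database are illustrative):

```yaml
# docker-compose.yml — together with the Dockerfile, this is all a new
# developer needs; no language toolchains are required on the host.
services:
  api:
    build: .            # image is built from the repo's own Dockerfile
    ports:
      - "8080:8080"
  db:
    image: postgres:14  # the pinned version lives here, not in a README
    environment:
      POSTGRES_PASSWORD: dev-only-password
```

With this in place, `docker compose up` is the entire setup procedure, regardless of the host operating system.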

2. Remote or local should not matter.

Setup should work regardless of whether a developer runs code on their own laptop or on a cloud server via an IDE’s remote development/SFTP plug-ins. This should hold true by default; if there is a case in which it cannot, the exception must be justified and documented.

3. Ensure a heterogeneous-ready workspace.

A good setup should accommodate multiple microservices written in multiple programming languages, using multiple data storage systems. A microservices architecture assumes the ability to combine heterogeneous microservices; it doesn’t mean just putting one codebase in one container or standardizing on one technology stack. Too often we see “[some-language] microservices framework” in marketing materials. Well, guess what — if 100% of your microservices are written in Java, there is something wrong with the setup, and no, you don’t get to chuckle if all your services are written in a “cool” language like Go.

Now, for the record: this does not in any way mean that in a well-managed microservices environment you should see every team picking whatever languages and databases they feel like and going for it. Quite the opposite: when uncertain, exercise caution and go with two, at most three, stacks. The point is that you should be able to introduce a new stack if you genuinely need it — and your example setup should demonstrate that you actually can, by implementing more than one stack.

The Rule of Twos

We have found proactively practicing heterogeneity in a microservices setup to be a great approach. For any critical component in your system, make sure that you are using at least two alternatives in production at the same time — even when you only need one. You should also make sure that you have an infrastructure to support the two alternatives as easily as you would use a single one. We call this approach the “Rule of Twos”.

Say that most of your APIs are written in Node.js — a truly wonderful, I/O-optimized stack for writing APIs. See if some of them could be implemented in Go, Java, Rust, etc., perhaps because they do something more CPU-bound, which Node is not great at. While you practice heterogeneity, however, do make sure that you limit the selection of programming languages and database systems across the entire application to two or three. Otherwise, you run a high risk of confusing your teams with too much choice and creating serious maintenance overhead.

4. Running a single microservice and/or a subsystem of several should be equally easy.

Let’s say an airline reservation system is implemented as three microservices. A developer should be able to check out any particular microservice individually and work on it, or check out the entire subsystem of interacting microservices (the reservation system implementation) and work on that. Both of these tasks should be very easy.
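The airline reservation example might be wired up in a single Compose file, so that one command brings up the whole subsystem while naming a single service runs just that one (the service names here are illustrative):

```yaml
# docker-compose.yml at the subsystem level: each service builds from
# its own subdirectory and Dockerfile.
services:
  reservations:
    build: ./reservations
  flights:
    build: ./flights
  payments:
    build: ./payments
```

With this layout, `docker compose up` starts all three services, while `docker compose up reservations` starts only the reservations service (plus anything it declares as a dependency).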

5. Run databases locally, if possible.

For the sake of isolation, local, Docker-ized alternatives should be provided for any database system, and it should be trivial to switch over to cloud services (e.g., AWS) via a configuration change. As an example, MinIO can act locally as a drop-in replacement for S3. Many AWS service alternatives can be installed via this GitHub site.
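A sketch of the MinIO approach, assuming a hypothetical service that reads its S3 endpoint from an environment variable:

```yaml
# docker-compose.yml fragment: MinIO stands in for S3 locally.
services:
  object-store:
    image: minio/minio
    command: server /data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: dev
      MINIO_ROOT_PASSWORD: dev-only-password
  api:
    build: .
    environment:
      # Point the service's S3 client at MinIO for local work; in cloud
      # environments this variable is unset so the real S3 endpoint is used.
      S3_ENDPOINT_URL: http://object-store:9000
```

The key design choice is that the switch between local and cloud storage is a configuration change only — no code changes, no manual steps.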

6. Implement containerization guidelines.

Not all containerization approaches are equal. Anybody can haphazardly stick code into a Docker container, but making a containerized coding environment developer-friendly takes more effort. Following are some principles that we have found essential:

a. Even though the code runtime is containerized, developers must be able to edit code on a host machine (e.g., their laptop, an EC2 dev server), with any code editor. However, during execution, a full run/test/debug should be executed in a container.

b. Since Docker Compose can generally do anything a Dockerfile can, developers can easily confuse the two. As such, it is important to establish the difference between them. We recommend the following formula: use a Dockerfile for building a container image, and Docker Compose for running things locally, including complex integrations. An image built with a Dockerfile should be directly runnable on Kubernetes, AWS ECS, Swarm, or any other production-grade runtime. Note that just because it can be doesn’t mean the local/dev image will always be identical to the one running in production: teams often optimize the former for usability and the latter for security and performance. A good example of this approach is the use of multistage builds.

c. Multistage builds must be utilized in Dockerfiles to accommodate usage of slim images in production and usage of more full-featured images for local development.
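A minimal multistage Dockerfile sketch, assuming a hypothetical Node.js service whose `npm run build` step emits a `dist/` directory:

```dockerfile
# Build stage: full-featured image with the complete toolchain.
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage: slim runtime image carrying only what is needed to run.
FROM node:18-slim AS production
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]
```

A dev setup can target the full-featured `build` stage (`docker build --target build`), while production uses the slim final stage.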

d. Developer user experience is critical. Implementing hot-reloading of the code and/or the ability to connect a debugger out of the box is an important feature.
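One common way to get both hot reloading and debugger access, sketched here for a hypothetical Node.js service with `nodemon` as a dev dependency (Compose picks up the override file automatically for local runs):

```yaml
# docker-compose.override.yml — local development only.
services:
  api:
    volumes:
      - ./src:/app/src   # edits on the host appear inside the container
    # Restart on file change and listen for a debugger on the standard
    # Node.js inspector port.
    command: npx nodemon --inspect=0.0.0.0:9229 src/server.js
    ports:
      - "9229:9229"
```

The production image is untouched; the override only changes how the container runs locally.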

7. Establish rules for painless database migrations.

It is extremely important to manage databases and the data in them in a way that supports and enhances team collaboration. Changes to data schemas must be codified and applied without any manual steps. The following list of principles facilitates painless data management in a microservices environment:

a. Any and all changes to a database schema must be codified in a series of “database migration” scripts. Migration files should be named and ordered by date.
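Date-prefixed filenames make chronological ordering trivial, because the names sort correctly as plain strings. A minimal sketch (the filenames are illustrative):

```python
from pathlib import Path


def ordered_migrations(migration_dir):
    """Return migration scripts in the order they should be applied.

    Relies on filenames like 20220115_create_bookings.sql: a date-stamped
    prefix means plain lexicographic order is chronological order.
    """
    return sorted(p.name for p in Path(migration_dir).glob("*.sql"))
```

A migration runner can then apply the returned scripts one by one, recording each applied name so reruns are idempotent.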

b. Database migrations should support both schema changes as well as sample data insertion.

c. Running database migrations should be part of the project launch (via make start, see the next section) and must be enforced.

d. Running database migrations must be automated and should be part of any build (integration, feature branch builds for PR, etc.).

e. It should be possible to indicate which migrations run on which environments (or which ones can be skipped), so that migrations that deal with sample data creation can be skipped in production, for example.
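This environment filtering can be as simple as tagging each migration with the environments it applies to. A sketch, where the data model and names are assumptions for illustration:

```python
def migrations_for_env(migrations, env):
    """Select the migrations that should run in the given environment.

    Each migration is a (filename, environments) pair; an empty set of
    environments means "run everywhere". Sample-data migrations declare
    only dev/staging and are therefore skipped in production.
    """
    return [name for name, envs in migrations if not envs or env in envs]
```

For example, a seed-data migration tagged `{"dev", "staging"}` is returned for `dev` but filtered out for `production`.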

f. These rules apply to all data storage systems: relational, columnar, NoSQL, and so forth.

g. Some examples:

a. Flyway hosts this introduction to database migrations

b. See this blog post by Daniel Miranda et al. about database migrations for Cassandra

c. Check out this example of using Node’s db-migrate for a MySQL database

8. Determine a pragmatic automated testing practice.

Automated testing is a complex subject. We have certainly seen both extremes of the spectrum: some teams giving up entirely on automated testing, and others being overzealous on test-driven development to the extent of it becoming a problem. We advocate for a measured, pragmatic approach to automated testing, one that balances developer experience with quality metrics and accommodates the different personal preferences of various developers on the team.

a. Test-first, test-as-you-code, or test-after-code should all be acceptable practices as long as all code is covered with a reasonable amount of meaningful tests before it is merged with the main branch.

b. Teams should use a testing approach and frameworks that are idiomatic for the platform/stack in which code is being developed (e.g., JUnit for Java). Codebases in the same stack (e.g., Go, Java) should use a uniform approach; microservices written in the same language should not do things differently depending on who wrote them and when.

c. Using external tools, especially for acceptance or performance testing, is fine with proper justification, given an important caveat: these tools (e.g., Cucumber) must be fully integrated into the code/repository of the service itself, and using and running them must be as easy as a native solution. An average developer of the service should not need to set anything up to get things going and should be able to easily run tests with a command like make test-all.

d. Special attention and care should be given to automated tests that span the boundaries of individual microservices. They will have to be applied either at a higher level (e.g., an API that invokes microservices, or a UI), or in some cases, a dedicated repository may need to be set up to house testing orchestration and automation for such tests.

e. Code linting/static analysis tooling should be set up, and a consistent linter configuration must be adopted for the organization’s style.

9. Branching and merging.

Virtually everyone these days uses some form of code version control system. While the basics of version control–driven development are well-understood, it’s worth reminding ourselves of some core principles of good branching hygiene that all team members should observe for a happy collaboration:

a. All development should happen on feature and bug branches.

b. Merging of a branch to the main branch should not be allowed without all tests (including integration tests in a temporary integration cluster spun up for the branch) passing on that branch.

c. The status of the test runs (after each commit/push) must be readily visible for code reviewers during pull requests.

d. Linting/static analysis errors should prevent code from being pushed to a branch, and/or merged into the main branch.

10. Common targets should be codified in a makefile.

Every code repository (and generally there should be one repository per microservice) should have a makefile that makes it easy for anybody to work with the code, regardless of the programming language stack used. This makefile should have standard targets, so that no matter which codebase the developer clones, in whatever language, they know that running make start brings it up and running make test runs the automated tests.

We recommend defining and implementing the following standard targets for your microservice makefiles:

  • start: Run the code.
  • stop: Stop the code.
  • build: Build the code (typically a container image).
  • clean: Clean all caches and run from scratch.
  • add-module: Add a new module (dependency).
  • remove-module: Remove a module (dependency).
  • dependencies: Ensure all modules declared in dependency management are installed.
  • test: Run all tests and produce a coverage report.
  • tests-unit: Run only unit tests.
  • tests-at: Run only acceptance tests.
  • lint: Run a linter to ensure conformance of coding style with defined standards.
  • migrate: Run database migrations.
  • add-migration: Create a new database migration.
  • logs: Show logs (from within the container).
  • exec: Execute a custom command inside the code’s container.
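A minimal sketch of a few of these targets, assuming the Docker-centric setup from guideline 1 and a hypothetical service container named `api` with npm scripts for its tests, linter, and migrations:

```makefile
.PHONY: start stop build test lint migrate

start:   ## Bring the service up with its local dependencies
	docker compose up --detach

stop:    ## Tear everything back down
	docker compose down

build:   ## Build the container image from the Dockerfile
	docker build --tag my-service .

test:    ## Run the test suite inside the container
	docker compose run --rm api npm test

lint:    ## Check coding style
	docker compose run --rm api npm run lint

migrate: ## Apply database migrations
	docker compose run --rm api npm run migrate
```

Because every target shells out to Docker, the makefile works identically regardless of the language stack inside the container.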

This summary is drawn from my reading of the book “Microservices: Up and Running.”
