Local Development Environment Using Minikube
Django applications usually require a database to store data, an in-memory cache, file storage, and sometimes connections to other applications to perform certain tasks. In this entry we’re going to set up a local development environment for developing Django applications using Minikube.
Install Minikube
Minikube is a tool that allows us to run a Kubernetes cluster locally. It is very easy to install, as it comes as a single binary that can be downloaded and placed in your workstation’s PATH.
The official documentation can be found here, but on Linux the installation is straightforward:
curl -Lo ~/.local/bin/minikube \
  https://github.com/kubernetes/minikube/releases/latest/download/minikube-linux-amd64 \
  && chmod 750 ~/.local/bin/minikube
As I’m the only user of my workstation, I like to put these kinds of executables under my home directory, so they can be easily backed up or kept when I upgrade my computer.
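To confirm the binary is reachable through the PATH and runs correctly, you can print its version (the exact version string will of course vary):

```shell
# Verify the minikube binary is on the PATH and executable
minikube version
```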
Create a cluster
Once the executable is downloaded and available in the PATH, we can start the cluster. In this case I set it up to use all the CPUs and memory available in my computer, and also enabled some addons.
minikube start --memory=max --cpus=no-limit --addons ingress,ingress-dns,registry,yakd
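Before moving on, it is worth checking that the cluster came up correctly and that the addons were enabled; a quick sanity check could look like this (the output will differ per machine):

```shell
# Check that the cluster components are up
minikube status

# List the cluster nodes as seen by kubectl
kubectl get nodes

# Confirm which addons are currently enabled
minikube addons list
```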
Create a database server
As the idea is to have a local development environment that can host several projects, the PostgreSQL operator from Zalando can help us set up databases, whether we prefer a single big instance hosting them all or a dedicated instance per project. For my local environment I’m using the former approach, so I’ll create a PostgreSQL instance in the postgres namespace shared by all the projects.
To install the PostgreSQL operator we have to add the Helm repository and install the Helm chart as described in the official documentation.
helm repo add postgres-operator-charts https://opensource.zalando.com/postgres-operator/charts/postgres-operator
helm install postgres-operator postgres-operator-charts/postgres-operator -n postgres-operator --create-namespace
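Before creating an instance, it is worth waiting until the operator pod is ready and its CRDs are registered; a quick check could look like this (resource names assume the chart defaults):

```shell
# Wait for the operator deployment to become available
kubectl -n postgres-operator rollout status deployment/postgres-operator

# The operator registers the postgresql CRD used in the manifest below
kubectl get crd postgresqls.acid.zalan.do
```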
Once the operator is deployed and running, we have to create a PostgreSQL database instance. We can do this with the following manifest:
---
apiVersion: v1
kind: Namespace
metadata:
  name: postgres
---
kind: "postgresql"
apiVersion: "acid.zalan.do/v1"
metadata:
  name: "database"
  namespace: "postgres"
  labels:
    team: acid
spec:
  teamId: "acid"
  postgresql:
    version: "17"
  numberOfInstances: 1
  maintenanceWindows: []
  volume:
    size: "10Gi"
  users:
    dbuser: []
  databases:
    myproject: dbuser
  allowedSourceRanges:
    # IP ranges to access your cluster go here
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
    limits:
      cpu: 500m
      memory: 500Mi
Store it in a file and apply it using kubectl:
kubectl apply -f postgresql-database.yaml
It will create, among other things, a pod in the postgres namespace, a service and a persistent volume claim. This is a very simple example with no replication or connection pooler enabled.
kubectl -n postgres get pods,pvc,services,postgresql
NAME             READY   STATUS    RESTARTS   AGE
pod/database-0   1/1     Running   0          8m29s

NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/pgdata-database-0   Bound    pvc-ca6d88fd-2e03-45fc-99dc-44819c6883e2   10Gi       RWO            standard       <unset>                 8m29s

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/database          ClusterIP   10.96.214.131   <none>        5432/TCP   8m29s
service/database-config   ClusterIP   None            <none>        <none>     8m23s
service/database-repl     ClusterIP   10.109.120.11   <none>        5432/TCP   8m29s

NAME                                TEAM   VERSION   PODS   VOLUME   CPU-REQUEST   MEMORY-REQUEST   AGE     STATUS
postgresql.acid.zalan.do/database   acid   17        1      10Gi     100m          100Mi            8m29s   Running
Once the pod was running, I opened a shell in it to access the PostgreSQL service.
kubectl exec -it -n postgres database-0 -- psql -U postgres
Then I used SQL queries to create the user and the database:
CREATE USER djangouser WITH ENCRYPTED PASSWORD 'changeme1234';
ALTER ROLE djangouser WITH login createdb;
CREATE DATABASE djangoproject WITH OWNER djangouser;
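To verify the new credentials from the host, one option is to forward the database service locally and connect with a local psql client; a sketch, assuming psql is installed on the workstation and using the password set above:

```shell
# Forward the database service to the local machine (leave running)
kubectl -n postgres port-forward service/database 5432:5432 &

# Connect with the user and database created above
# (prompts for the password, changeme1234 in this example)
psql -h 127.0.0.1 -U djangouser -d djangoproject -c '\conninfo'
```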
Optionally, the operator dashboard can be installed as follows; it can also be used to create databases.
helm repo add postgres-operator-ui-charts https://opensource.zalando.com/postgres-operator/charts/postgres-operator-ui
helm install postgres-operator-ui postgres-operator-ui-charts/postgres-operator-ui -n postgres-operator
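The dashboard is a web application, so it has to be exposed somehow to be reachable from a browser; a simple option is a port-forward (the service name here assumes the chart defaults):

```shell
# Forward the UI service locally, then open http://localhost:8081
kubectl -n postgres-operator port-forward service/postgres-operator-ui 8081:80
```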
Defining the development environment
Developing application stacks in a Kubernetes environment is quite different from developing them with docker-compose.yaml files, mainly because it requires building the images and pushing them to a registry.
To simplify this process we can use Skaffold, a tool developed by Google that builds and deploys our application in a Kubernetes cluster, even using Helm charts, and synchronizes files with the containers so we don’t have to rebuild the image and recreate the pods for every change, leveraging Django’s live-reload feature.
Skaffold’s behavior is defined in a file called skaffold.yaml, which describes how to build, deploy and synchronize files with the containers. In this case, the skaffold.yaml looks like this:
apiVersion: skaffold/v4beta13
kind: Config
metadata:
  name: local-development
build:
  local:
    push: false
  artifacts:
    - image: djangoproject
      context: django_project
      docker:
        dockerfile: Dockerfile
      sync:
        infer:
          - 'src/**/*.py'
deploy:
  helm:
    releases:
      - name: django-project-chart
        namespace: django-project
        chartPath: ./django-project-chart
        valuesFiles:
          - local-values.yaml
        setValueTemplates:
          image.repository: "{{.IMAGE_REPO_djangoproject}}"
          image.tag: "{{.IMAGE_TAG_djangoproject}}@{{.IMAGE_DIGEST_djangoproject}}"
portForward:
  - resourceType: service
    resourceName: django-project-chart
    port: 8000
    localPort: 8000
    namespace: django-project
This example uses a very small part of the skaffold configuration, please refer to the official documentation for a complete reference.
Let’s review each part of the configuration in the following sections.
Build section
build:
local:
push: false
artifacts:
- image: djangoproject
context: django_project
docker:
dockerfile: Dockerfile
sync:
infer:
- 'src/**/*.py'
build defines the build process; in this case we’re using the local builder and we’re not pushing the images to a registry.
build.artifacts defines the artifacts to build; in this case we’re building a Docker image from the Dockerfile in the django_project directory. In a more complex scenario we could build multiple artifacts. It’s very similar to the docker-compose.yaml file, where we define the context for building the image, the dockerfile and the name of the image to build.
build.artifacts[].sync defines the files to synchronize with the containers; in this case we’re synchronizing all the Python files in the src directory, so no rebuild is required when we change a Python file. The method used here is infer, which means that the destination of the files to synchronize is inferred from the Dockerfile.
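To inspect how Skaffold resolved the configuration, including defaults that were filled in, the diagnose subcommand prints the effective configuration without building or deploying anything:

```shell
# Print the fully resolved skaffold configuration
skaffold diagnose
```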
Deploy section
deploy:
helm:
releases:
- name: django-project-chart
namespace: django-project
chartPath: ./django-project-chart
valuesFiles:
- local-values.yaml
setValueTemplates:
image.repository: "{{.IMAGE_REPO_djangoproject}}"
image.tag: "{{.IMAGE_TAG_djangoproject}}@{{.IMAGE_DIGEST_djangoproject}}"
deploy defines the deployment process; in this case we’re using Helm to deploy a local chart from the django-project-chart directory, but we could also use remote charts.
deploy.helm.releases[].namespace defines the namespace where the chart will be deployed; in this case, the django-project namespace.
deploy.helm.releases[].chartPath defines the path of the chart to deploy; in this case, the local chart in the django-project-chart directory.
deploy.helm.releases[].valuesFiles defines the values files to use when deploying the chart, as we’d do with the -f parameter when using Helm directly. If several values files are specified, they are merged in the order they appear; in this case we’re using the local-values.yaml file.
deploy.helm.releases[].setValueTemplates defines values to set when deploying the chart; in this case we’re using the IMAGE_REPO_djangoproject, IMAGE_TAG_djangoproject and IMAGE_DIGEST_djangoproject environment variables to set the image repository, tag and digest. Skaffold replaces these variables with the corresponding values of the built artifacts.
PortForward section
portForward defines the port-forwarding process; in this case we’re forwarding port 8000 of the django-project-chart service to the local machine. This part is a bit tricky because you need to specify the name that the service will get when the chart is deployed, and in most cases that name is generated by a template.
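One way to discover the generated service name without deploying anything is to render the chart templates locally and look at the Service manifests; a sketch, assuming Helm is installed and using the chart and values file from this example:

```shell
# Render the chart locally and show the generated Service manifests,
# including the metadata.name the portForward section must match
helm template django-project-chart ./django-project-chart \
  -f local-values.yaml | grep -A 4 'kind: Service'
```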
Deploy application in development mode
Once our skaffold.yaml file is ready, the first thing is to make Skaffold aware of our Minikube environment. To do so, execute the following command:
eval $(minikube docker-env)
This will export the environment variables needed to use Minikube’s Docker daemon. Now we can start the Django app in development mode by running:
skaffold dev
Then skaffold will build the images, deploy the helm chart and start the port forwarding.
Generating tags...
 - djangoproject -> djangoproject:cb14850-dirty
Checking cache...
 - djangoproject: Not found. Building
Starting build...
Found [minikube] context, using local docker daemon.
Building [djangoproject]...
Target platforms: [linux/amd64]
Sending build context to Docker daemon  47.62kB
Step 1/8 : FROM python:3.12-slim
 ---> acf8897bf01a
Step 2/8 : WORKDIR /app
 ---> Using cache
 ---> 09592fac9291
Step 3/8 : RUN apt update && apt install -y --no-install-recommends apache2-dev
 ---> Using cache
 ---> 4c360f23b9ad
Step 4/8 : COPY pyproject.toml poetry.lock ./
 ---> Using cache
 ---> 0ae022b3d66e
Step 5/8 : RUN pip install poetry && poetry config virtualenvs.create false && poetry install
 ---> Using cache
 ---> b302031428d6
Step 6/8 : COPY src/ ./
 ---> 68e30af18505
Step 7/8 : EXPOSE 8000
 ---> Running in 1ba8d6fda51e
 ---> a110f0ec3065
Step 8/8 : CMD ["scripts/start.sh"]
 ---> Running in dd03a55a1a04
 ---> 0c44728760d6
Successfully built 0c44728760d6
Successfully tagged djangoproject:cb14850-dirty
Build [djangoproject] succeeded
Tags used in deployment:
 - djangoproject -> djangoproject:0c44728760d64c229a0df8505b832e4daa35343871f9b753e1523dc8478cc023
Starting deploy...
Helm release django-project-chart not installed. Installing...
NAME: django-project-chart
LAST DEPLOYED: Sun Jun  1 17:02:42 2025
NAMESPACE: django-project
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace django-project -o jsonpath="{.spec.ports[0].nodePort}" services django-project-chart)
  export NODE_IP=$(kubectl get nodes --namespace django-project -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
Waiting for deployments to stabilize...
 - django-project:deployment/django-project-chart: Readiness probe failed: Get "http://10.244.0.54:8000/ht/?format=json": dial tcp 10.244.0.54:8000: connect: connection refused
 - django-project:pod/django-project-chart-75669b9df9-gnw6q: Readiness probe failed: Get "http://10.244.0.54:8000/ht/?format=json": dial tcp 10.244.0.54:8000: connect: connection refused
 - django-project:deployment/django-project-chart is ready.
Deployments stabilized in 13.101 seconds
Port forwarding service/django-project-chart in namespace django-project, remote port 8000 -> http://127.0.0.1:8000
Listing files to watch...
 - djangoproject
Press Ctrl+C to exit
Watching for changes...
If we change a Python file in the src directory, Skaffold will synchronize it with the container and the Django app will be reloaded, without rebuilding the image, upgrading the chart or recreating the containers. We’ll see output like the following:
Syncing 1 files for djangoproject:0c44728760d64c229a0df8505b832e4daa35343871f9b753e1523dc8478cc023
WARN[3319] no running pods found in namespace "default"  subtask=-1 task=DevLoop
Watching for changes...
[django-project-chart] Performing system checks...
[django-project-chart]
[django-project-chart] System check identified no issues (0 silenced).
[django-project-chart] June 01, 2025 - 15:49:43
[django-project-chart] Django version 5.2.1, using settings 'core.settings'
[django-project-chart] Starting development server at http://0.0.0.0:8000/
[django-project-chart] Quit the server with CONTROL-C.
[django-project-chart]
[django-project-chart] WARNING: This is a development server. Do not use it in a production setting. Use a production WSGI or ASGI server instead.
[django-project-chart] For more information on production servers see: https://docs.djangoproject.com/en/5.2/howto/deployment/
But if we change a different file, for instance the startup script, the container image will be rebuilt and the Helm chart upgraded.
Generating tags...
 - djangoproject -> djangoproject:cb14850-dirty
Checking cache...
 - djangoproject: Not found. Building
Starting build...
Found [minikube] context, using local docker daemon.
Building [djangoproject]...
Target platforms: [linux/amd64]
Sending build context to Docker daemon  47.62kB
Step 1/8 : FROM python:3.12-slim
 ---> acf8897bf01a
Step 2/8 : WORKDIR /app
 ---> Using cache
 ---> 09592fac9291
Step 3/8 : RUN apt update && apt install -y --no-install-recommends apache2-dev
 ---> Using cache
 ---> 4c360f23b9ad
Step 4/8 : COPY pyproject.toml poetry.lock ./
 ---> Using cache
 ---> 0ae022b3d66e
Step 5/8 : RUN pip install poetry && poetry config virtualenvs.create false && poetry install
 ---> Using cache
 ---> b302031428d6
Step 6/8 : COPY src/ ./
 ---> 96e41a99d0de
Step 7/8 : EXPOSE 8000
 ---> Running in ad5c84756851
 ---> 04462fde95d2
Step 8/8 : CMD ["scripts/start.sh"]
 ---> Running in 68dece0148dc
 ---> 06906255c423
Successfully built 06906255c423
Successfully tagged djangoproject:cb14850-dirty
Build [djangoproject] succeeded
Tags used in deployment:
 - djangoproject -> djangoproject:06906255c423744ff7ab16c95158ad94260eedf50b53b2d17eb5cbd37557c437
Starting deploy...
Release "django-project-chart" has been upgraded. Happy Helming!
NAME: django-project-chart
LAST DEPLOYED: Sun Jun  1 18:00:50 2025
NAMESPACE: django-project
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace django-project -o jsonpath="{.spec.ports[0].nodePort}" services django-project-chart)
  export NODE_IP=$(kubectl get nodes --namespace django-project -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
Waiting for deployments to stabilize...
Deployments stabilized in 11.58064ms
To access the application we can use the forwarded port:
curl http://localhost:8000/api/hello/
{"message":"Hello, world! (with autoreload)"}
Considerations
In this example, the Django application was configured to start in development mode using Django’s runserver, which is not suitable for production. In a real production environment, we should use a production-ready web server like gunicorn or uwsgi.
Also, in our values file we set the DB_PASSWORD environment variable from a secret. For the app to work, that secret has to be created manually in the namespace, either declaratively from the django-secret-example.yaml file using
kubectl -n django-project apply -f django-secret-example.yaml
or imperatively using:
kubectl -n django-project create secret generic django-secret \
--from-literal DB_PASSWORD=changeme1234
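Either way, we can confirm the secret exists and holds the expected value before starting the app; for example:

```shell
# Check the secret exists and decode the stored password
kubectl -n django-project get secret django-secret \
  -o jsonpath='{.data.DB_PASSWORD}' | base64 -d
```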
Stop the cluster
To stop the cluster, we should stop Skaffold first by pressing Ctrl+C; then we can stop the Minikube cluster by running:
minikube stop
This will stop the minikube cluster without deleting it, so we can continue our work from where we left it.
Start the cluster
First, start the minikube cluster:
minikube start
Then we have to make sure that the Docker-related environment variables are in place before running skaffold:
eval $(minikube docker-env)
The last step is to launch our app in dev mode to continue working:
skaffold dev
If no changes were made to the code, the application will start using the image already present on the cluster, without rebuilding it.
Destroy the cluster
If we want to destroy the minikube cluster because it’s not needed anymore or we want to start again from scratch, run the following command after stopping the cluster.
minikube delete
Conclusion
Using Skaffold and Minikube we can develop locally and deploy our application in a Kubernetes cluster in a way very similar to production, allowing us to test not only our application but also the Helm chart and the image build process.
Skaffold files are shareable, so they can be stored in a version control system and shared among several developers, taking care only of the sensitive data, exactly as we do with Docker files.
References
- Minikube: https://minikube.sigs.k8s.io/
- Skaffold: https://skaffold.dev/
- Zalando Postgres Operator: https://github.com/zalando/postgres-operator/blob/master/docs/quickstart.md
- Files in my homelab repo: https://github.com/juanjo-vlc/homelab/tree/main/local-development