Category: Backend

How to build a dynamic Email template


Automated emails have come a long way in the past couple of years. What used to be a text-only email today contains various forms, dynamic links, and images, depending on each company's design. Let's build a dynamic email template in under 10 minutes!

Today, receiving well-designed, stylish HTML emails has become the standard at most companies, which is why adopting this approach over regular text-only emails has become a must.

Developing HTML templates doesn't require a lot of coding skill; however, knowing how to code the template so it appears correctly on all devices and in old email clients is the real challenge.

In this blog post, I will go through a step-by-step guide on how to build a cross-platform-compatible dynamic email template using HTML, CSS, and PHP.

Basic guidelines

As described above, the biggest challenge with developing an HTML email template is making sure it's cross-platform compatible. There are many email clients, such as Google Mail, Apple Mail, Outlook, AOL, Thunderbird, Yahoo!, Hotmail, Lotus Notes, etc. Some of these clients are light years behind in terms of CSS support, which means we must resort to using HTML tables to control the design layout if we really want the email template to display consistently for every user.

In fact, using HTML tables is the best way to achieve a layout that will render consistently across different email clients. Think of the template as being constructed of tables within tables within tables…

Secondly, we must use inline CSS to control elements within your emails, such as background colors and fonts. CSS style declarations should be very basic, without the use of any CSS files.

To emphasize the HTML tables rule above, see the example below, where I've modified the border attribute of each table to be visible. Please note that %s is a placeholder where dynamic text and images will be filled in, as I'll soon describe (scroll to the end to see the final email template):

As you can see above, the whole layout is built with HTML tables. We'll be using PHP to parse the %s placeholders and fill them with dynamic text before an email is sent to the user.

Developing the static template

So let’s start programming!

Before we begin the template itself, you'll need to begin your HTML file with an XHTML doctype:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Demystifying Email Design</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
</head>

I recommend defining all tables with border="1" as seen above, since it's easier to spot errors and see the skeleton of the layout as you go along. First, let's create the basic layout:

<body style="margin: 0; padding: 0;">
 <table border="1" cellpadding="0" cellspacing="0" width="100%">
  <tr>
   <td>My first email template!</td>
  </tr>
 </table>
</body>

Set the cell padding and cell spacing to zero to avoid any unexpected space in the table. Also, set the width to 100%, since this table acts as a true body tag for our email (styling of the body tag itself isn't fully supported).

Now, instead of the text 'My first email template!', we'll add another table, which will present the actual email template display:

<table align="center" border="1" cellpadding="0" cellspacing="0" width="600" style="border-collapse: collapse;">
  <tr>
    <td>This is the email template body</td>
  </tr>
</table>

As you can see, the width is set to 600 pixels, a safe maximum width for emails to display correctly on most email clients. In addition, set the border-collapse property to collapse, in order to make sure there are no unwanted spaces between the tables and borders.

In the example above, you can see that our email template consists of five sections (rows) which is why we’ll create these rows and then add tables accordingly to each in order to complete the template.

<table align="center" border="1" cellpadding="0" cellspacing="0" width="600">
  <tr><td>Row 1</td></tr>
  <!-- rows 2-4 go here -->
  <tr><td>Row 5</td></tr>
</table>

For each row, we'll create a new table in which the methodology is similar to the above. We'll also add columns and the right padding accordingly, to align all objects and reach the desired template.

To view the final HTML template, visit our project Github page.

A few observations:

  1. Add alt attributes where needed, in order to present text instead of images in case the email client is unable to load them properly.
  2. Add %s placeholders where you’d like the data to appear dynamically, depending on the email use case.
  3. If you look carefully, the percentage values appear with an extra ‘%’. This is so the PHP function used to fill the template knows how to parse the text properly: a literal percent sign must be escaped as ‘%%’ so it isn’t mistaken for a placeholder.
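
The escaping rule is easy to see in isolation. Python's printf-style formatting follows the same convention as PHP's sprintf, so here is an analogous sketch (illustrative only, not the PHP code itself) of why the doubling matters:

```python
# A '%s' is a placeholder; a literal percent sign must be written '%%'
# so the formatter doesn't treat it as the start of another placeholder.
template = "Hi %s, everything is 20%% off this week!"
filled = template % ("Alice",)
print(filled)  # Hi Alice, everything is 20% off this week!
```

Without the doubled percent sign, the formatter would try to interpret `% o` as a format specifier and fail.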

Note! I’ve removed the URLs for security and privacy reasons. Feel free to replace them with your own images and personal links.

And that is it! You’ve successfully developed your own email static template. Now let’s get our hands dirty and make it dynamic!

Building a dynamic template with PHP

On the server side, create the email send method below:

function send_mail_template($to, $from, $subject, $message) {
  $headers = "MIME-Version: 1.0" . "\r\n";
  $headers .= "Content-type: text/html; charset=UTF-8" . "\r\n";
  $headers .= "From: ContactNameGoesHere <" . $from . ">\r\n";
  $response = mail($to, $subject, $message, $headers);
  return $response;
}

Now if you look carefully back to the template.html file, you’ll see that I’ve added %s placeholders in certain places. More particularly, in the image banner element, and body text.

All we need to do is import the above template.html file, parse it like regular text, replace each ‘%s’ with the relevant text, and then use the send_mail_template method above.

function build_email_template($email_subject_image, $message) {
  // Get email template as string
  $email_template_string = file_get_contents('template.html', true);
  // Fill email template with message and relevant banner image
  $email_template = sprintf($email_template_string, 'URL_to_Banner_Images/banner_' . $email_subject_image . '.png', $message);
  return $email_template;
}

After we’ve got that taken care of, we can use both methods and send our very first dynamic email!

Let’s use an example. Say a new user has just verified their email. We’d like to automate that use case on the server side and send the user a ‘Your email has been successfully verified’ email.

Assume we have the user’s verified email ‘’ and the company’s email is ‘’.

We can now send an automated email:

$from = "";
$to = "";
$body_text = "Your email has been successfully verified...";
$banner_image_subject = "account_verified";
$final_message = build_email_template($banner_image_subject, $body_text);
send_mail_template($to, $from, "Your email has been verified", $final_message);

Finally! You can now use this methodology any way you need. After sending this example, applying GreenIQ’s company images and text, this is the final email template sent to the user:

Simple right?

Check out the full project on the project’s Github page!

Deploy Django app: Nginx, Gunicorn, PostgreSQL & Supervisor


Django has been the most popular Python-based web framework for a while now. It is powerful, robust, full of capabilities and surrounded by a supportive community. Django is based on models, views and templates, similarly to other MVC frameworks out there.

Django provides you with a development server out of the box once you start a new project using the commands:

$ django-admin startproject my_project 
$ python ./manage.py runserver 8000

With two lines in the terminal, you can have a working development server on your local machine so you can start coding. One of the tricky parts when it comes to Django is deploying the project so it will be available from different devices around the globe. As technological entrepreneurs, we need to not only develop apps with backend and frontend but also deploy them to a production environment which has to be modular, maintainable and of course secure.

django dev server

Deployment of a Django app requires several mechanisms, which will be covered below. Before we begin, let's align on the tools we are going to use throughout this post:

  1. Python version 2.7.6
  2. Django version 1.11
  3. Linux Ubuntu server hosted on DigitalOcean cloud provider
  4. Linux Ubuntu local machine
  5. Git repository containing your codebase

I assume you are already using 1, 2, 4 and 5. As for the Linux server, we are about to create it together during the first step of the deployment tutorial. Please note that this post discusses deployment on a single Ubuntu server. This configuration is great for small projects, but in order to scale your resources up to support larger amounts of traffic, you should consider a high-availability server infrastructure, using load balancers, floating IP addresses, redundancy and more.

Linux is much more popular for serving web apps than Windows. Additionally, Python and Django work together very well with Linux, and not so well with Windows.

There are many reasons for choosing DigitalOcean as a cloud provider, especially for small projects that will be deployed on a single droplet (a virtual server in DigitalOcean terminology). DigitalOcean is a great solution for software projects and startups which start small and scale up step by step. Read more about my comparison between DigitalOcean and Amazon Web Services in terms of an early-stage startup software project.

There are some best practices for setting up your Django project that I highly recommend you follow before starting the deployment process. These include working with a virtual environment, exporting a requirements.txt file, and configuring the settings.py file for working with multiple environments.

django best practices

This post will cover the deployment process of a Django project from A to Z on a brand-new Linux Ubuntu server. Feel free to choose your favorite cloud provider other than DigitalOcean for deployment.

As mentioned, the built-in development server of Django is weak and is not built for scale. You can use it for developing your Django project yourself or sharing it with your co-workers, but not more than that. In order to serve your app in a production environment, we need to use several components that will talk to each other and make the magic happen. Hosting a web application usually requires the orchestration of three actors:

  1. Web server
  2. Gateway
  3. Application

The web server

The web server receives an HTTP request from the client (the browser) and is usually responsible for load balancing, proxying requests to other processes, serving static files, caching and more. The web server usually interprets the request and sends it to the gateway. Common web servers are Apache and Nginx. In this tutorial, we will use Nginx (which is also my favorite).

The Gateway

The gateway translates the request received from the web server so the application can handle it. The gateway is often responsible for logging and reporting as well. We will use Gunicorn as our Gateway for this tutorial.
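
Concretely, the contract between the gateway and the application is WSGI: Gunicorn calls a Python callable with the request environment and a start_response function. A minimal sketch of that interface (illustrative only, not your actual Django app):

```python
from wsgiref.util import setup_testing_defaults

# The same callable shape Gunicorn expects when we later point it
# at app.wsgi:application.
def application(environ, start_response):
    body = b"Hello from WSGI"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the callable without a real server, using stdlib test helpers.
environ = {}
setup_testing_defaults(environ)
captured = {}

def start_response(status, headers):
    captured["status"] = status

result = application(environ, start_response)
print(captured["status"])  # 200 OK
print(b"".join(result))    # b'Hello from WSGI'
```

Django generates exactly such a callable for you in its wsgi.py module, which is why Gunicorn can serve any Django project without extra glue code.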

The Application

As you may have already guessed, the application refers to your Django app. The app takes the interpreted request, processes it using the logic you implemented as a developer, and returns a response.

Assuming you have an existing ready-for-deployment Django project, we are going to deploy your project by following these steps:

  1. Creating a new DigitalOcean droplet
  2. Installing prerequisites: pip, virtual environment, and git
  3. Pulling the Django app from Git
  4. Setting up PostgreSQL
  5. Configuring Gunicorn with Supervisor
  6. Configuring Nginx for listening to requests
  7. Securing your deployed app: setting up firewall

Creating a droplet

A droplet in DigitalOcean refers to a virtual Linux server with CPU, RAM and disk space. The first step in this tutorial is creating a new droplet and connecting to it via SSH. Assuming your local machine is running Ubuntu, we are going to create a new SSH key pair in order to easily and securely connect to our droplet once it is created. Connecting using SSH keys (rather than a password) is both simpler and more secure. If you already have an SSH key pair, you can skip the creation process. On your local machine, enter in the terminal:

$ ssh-keygen -t rsa

You will be asked two more questions: where to locate the keys (the default is fine) and whether you want to set up a passphrase (not essential).

Now the key pair is located in:

/home/user/.ssh/

where id_rsa.pub is your public key and id_rsa is your private key. In order to use the key pair to connect to a remote server, the public key should be located on the remote server and the private key should be located on your local machine.

Notice that the public key can be located on every remote server you wish to connect to. But, the private key must be kept only on your local machine! Sharing the private key will enable other users to connect to your server.

After signing up with DigitalOcean, open the SSH page and click on the Add SSH Key button. In your terminal copy the newly-created public key:

$ cat /home/user/.ssh/id_rsa.pub

Enter the new public key you generated and name it as you wish.

SSH key

Now, once the key is stored in your account, you can assign it to every droplet you create. The droplet will contain the key so you can connect to it from your local machine, while password authentication will be disabled by default, which is highly recommended.

Now we are ready to create our droplet. Click on “Create Droplet” at the top bar of your DigitalOcean dashboard.

create droplet

Choose Ubuntu 16.04 64-bit as your image, a droplet size of either 512MB or 1GB of RAM, and whatever region makes sense to you.


image distro

droplet size

droplet region

You can select the private networking feature (which is not essential for this tutorial). Make sure to select the SSH key you’ve just added to your account. Name your new droplet and click “Create”.

private networking

select ssh keys

create droplet

Once your new droplet has been created, you should be able to connect to it easily using the SSH key you created. In order to do that, copy the IP address of your droplet from the droplets page inside your dashboard, go to your local terminal and type:

$ ssh root@IP_ADDRESS_COPIED

Make sure to replace IP_ADDRESS_COPIED with your droplet’s IP address. You should be connected by now.

Tip for advanced users: in case you want to configure an even simpler way to connect, add an alias to your droplet by editing the file:

$ nano /home/user/.ssh/config

and adding:

Host remote-server-name 
    Hostname DROPLET_IP_ADDRESS
    User root

Make sure to replace remote-server-name with a name of your choice, and DROPLET_IP_ADDRESS with the IP address of the server.

Save the file by hitting Ctrl+O and then close it with Ctrl+X. Now all you need to do in order to connect to your droplet is typing:

$ ssh remote-server-name

That simple.

Installing prerequisites

Once connected to your droplet, we are going to install some software in order to start our deployment process. Start by updating your repositories and installing pip and virtualenv.

$ sudo apt-get update 
$ sudo apt-get install python-pip python-dev build-essential libpq-dev postgresql postgresql-contrib nginx git virtualenv virtualenvwrapper 
$ export LC_ALL="en_US.UTF-8" 
$ pip install --upgrade pip 
$ pip install --upgrade virtualenv

Hopefully, you work with a virtual environment on your local machine. In case you don't, I highly recommend reading my best practices post for setting up a Django project, in order to understand why working with virtual environments is an essential part of your Django development process.

Let’s get to configuring the virtual environment. Create a new folder with:

$ mkdir ~/.virtualenvs 
$ export WORKON_HOME=~/.virtualenvs

Configure the virtual environment wrapper for easier access by running:

$ nano ~/.bashrc

and adding this line to the end of the file:

. /usr/local/bin/virtualenvwrapper.sh

Tip: use Ctrl+V to scroll down faster, and Ctrl+Y to scroll up faster inside the nano editor.

Hit Ctrl+O to save the file and Ctrl+X to close it. In your terminal type:

$ . .bashrc

Now you should be able to create your new virtual environment for your Django project:

$ mkvirtualenv virtual-env-name

From within your virtual environment install:

(virtual-env-name) $ pip install django gunicorn psycopg2

Tip: Useful commands for working with your virtual environment:

$ workon virtual-env-name # activate the virtual environment 
$ deactivate # deactivate the virtual environment

Pulling application from Git

Start by creating a new user that will hold your Django application:

$ adduser django 
$ cd /home/django 
$ git clone REPOSITORY_URL

Assuming your code base is already located in a Git repository, just type your password and your repository will be cloned into your remote server. You might need to add execute permissions by navigating into your project folder (the one you’ve just cloned) and typing:

$ chmod 755 ./manage.py

In order to take the virtual environment one step further in terms of simplicity, copy the path of your project’s main folder to the virtual environment settings by typing:

$ pwd > /root/.virtualenvs/virtual-env-name/.project

Make sure to replace virtual-env-name with the real name of your virtual environment. Now, once you use the workon command to activate your virtual environment, you’ll be navigated automatically to your project’s main path.

In order to set up the DJANGO_SETTINGS_MODULE environment variable properly, type:

$ nano /root/.virtualenvs/virtual-env-name/bin/postactivate # replace virtual-env-name with the real name

and add this line to the file:

export DJANGO_SETTINGS_MODULE=app.settings

Make sure to replace app.settings with the location of your settings module inside your Django app. Save and close the file.
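
For context, DJANGO_SETTINGS_MODULE is the same variable a generated manage.py falls back to at startup; a minimal sketch of that fallback behavior (the app.settings path is carried over from the example above):

```python
import os

# manage.py-style fallback: only set the variable when the shell
# (or the postactivate hook above) hasn't already provided a value.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "app.settings")
print(os.environ["DJANGO_SETTINGS_MODULE"])
```

Setting it in the postactivate hook simply means the value is already present whenever the virtual environment is active, so every management command picks up the right settings module.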

Assuming you’ve set up your requirements.txt file as described in the Django best practices post, you’re now able to install all your requirements at once by navigating to the path of the requirements.txt file and run from within your virtual environment:

(virtual-env-name) $ pip install -r requirements.txt

Setting up PostgreSQL

Assuming you’ve set up your settings module as described in the Django best practices post, you should by now have a separation between the development and production settings files. Your production settings file should contain the PostgreSQL connection settings as well. If it doesn’t, add to the file:

DATABASES = { 
    'default': { 
        'ENGINE': 'django.db.backends.postgresql', 
        'NAME': 'app_db', 
        'USER': 'app_user', 
        'PASSWORD': 'password', 
        'HOST': 'localhost', 
        'PORT': '5432', 
    } 
} 

I highly recommend updating and pushing the file on your local machine and pulling it from the remote server using the repository we cloned.

Let’s get to creating the production database. Inside the terminal, type:

$ sudo -u postgres psql

Now you should be inside PostgreSQL terminal. Create your DB and user with:

> CREATE DATABASE app_db; 
> CREATE USER app_user WITH PASSWORD 'password'; 
> ALTER ROLE app_user SET client_encoding TO 'utf8'; 
> ALTER ROLE app_user SET default_transaction_isolation TO 'read committed'; 
> ALTER ROLE app_user SET timezone TO 'UTC'; 
> GRANT ALL PRIVILEGES ON DATABASE app_db TO app_user; 
Make sure your details here match the settings file DB configuration as described above. Exit the PostgreSQL shell by typing \q.

Now you should be ready to run the migration commands on the new DB. Assuming all of your migrations folders are listed in the .gitignore file, meaning they are not pushed to the repository, your migrations folders should be empty. Therefore, you can set up the DB by navigating to your main project path with:

(virtual-env-name) $ cdproject

and then run:

(virtual-env-name) $ python ./manage.py migrate
(virtual-env-name) $ python ./manage.py makemigrations
(virtual-env-name) $ python ./manage.py migrate

Don’t forget to create yourself a superuser by typing:

(virtual-env-name) $ python ./manage.py createsuperuser

Configuring Gunicorn with Supervisor

Now that the application is set up properly, it’s time to configure our gateway for sending requests to our Django application. We will use Gunicorn, a commonly used gateway.

Start by navigating to your project’s main path by typing:

(virtual-env-name) $ cdproject

First, we will test gunicorn by typing:

(virtual-env-name) $ gunicorn --bind 0.0.0.0:8000 app.wsgi:application

Make sure to replace app with your app’s name. Once gunicorn is running your application, you should be able to access http://IP_ADDRESS:8000 and see your application in action.

When you’re finished testing, hit Ctrl+C to stop gunicorn from running.

Now it’s time to run gunicorn as a service, to make sure it keeps running continuously. Rather than setting up a systemd service, we will use a more robust approach with Supervisor. Supervisor, as the name suggests, is a great tool for monitoring and controlling processes, and it helps you understand better how your processes operate.

To install supervisor, type outside of your virtual environment:

$ sudo apt-get install supervisor

Once supervisor is running, every .conf file that is included in the path:

/etc/supervisor/conf.d/
represents a monitored process. Let’s add a new .conf file to monitor gunicorn:

$ nano /etc/supervisor/conf.d/gunicorn.conf

and add into the file:

[program:gunicorn]
command=/root/.virtualenvs/virtual-env-name/bin/gunicorn --workers 3 --bind unix:/home/django/app-django/app/app.sock app.wsgi:application 
directory=/home/django/app-django/app
autostart=true
autorestart=true

Make sure that all the references are properly configured. Save and close the file.

Now let’s update supervisor to monitor the gunicorn process we’ve just created by running:

$ supervisorctl reread 
$ supervisorctl update

In order to validate the process integrity, use this command:

$ supervisorctl status

By now, gunicorn operates as an internal process rather than a process that can be accessed by users outside the machine. In order to start sending traffic to gunicorn and then to your Django application, we will set up Nginx to serve as a web server.

Configuring Nginx

Nginx is one of the most popular web servers out there. The integration between Nginx and Gunicorn is seamless. In this section, we’re going to set up Nginx to send traffic to Gunicorn. In order to do that, we will create a new configuration file (make sure to replace app with your own app name):

$ nano /etc/nginx/sites-available/app

then edit the file by adding:

server { 
    listen 80; 
    server_name SERVER_DOMAIN_OR_IP; 

    location = /favicon.ico { access_log off; log_not_found off; } 

    location /static/ { 
        root /home/django/app-django/app; 
    } 

    location / { 
        include proxy_params; 
        proxy_pass http://unix:/home/django/app-django/app/app.sock; 
    } 
} 

This configuration will proxy requests to the appropriate route in your server. Make sure to set all the references properly according to Gunicorn and to your app configurations.

Initiate a link with:

$ ln -s /etc/nginx/sites-available/app /etc/nginx/sites-enabled

Check Nginx configuration by running:

$ nginx -t

Assuming all is good, restart Nginx by running:

$ systemctl restart nginx

By now you should be able to access your server only by typing your IP in the browser because Nginx listens on port 80 which is the default port browsers use.


Securing your deployed app: setting up a firewall

Well done! You should have a deployed Django app by now! Now it’s time to secure the app to make it much more difficult to hack. In order to do that, we will use ufw, the built-in Linux firewall.

ufw works by configuring rules. Rules tell the firewall which kind of traffic it should accept or decline. At this point, there are two kinds of traffic we want to accept, or in other words, two ports we want to open:

  1. port 80 for listening to incoming traffic via browsers
  2. port 22 to be able to connect to the server via SSH.

Open the port by typing:

$ ufw allow 80 
$ ufw allow 22

then enable ufw by typing:

$ ufw enable

Tip: before closing the terminal, make sure you are able to connect via SSH from another terminal, so you’re not locked out of your droplet due to bad firewall configuration.

What to do next?

This post is the ultimate guide to deploying a Django app on a single server. In case you’re developing an app that should serve larger amounts of traffic, I suggest you look into highly scalable server architectures. You can start with my post about how to design a high-availability server architecture.

3 best practices for better setting up your Django project


Django is a robust open source Python-based framework for building web applications. It has gained popularity during the last couple of years, and it is already mature and widely used, with a large community behind it. Among other Python-based frameworks for creating web applications (like Flask and Pyramid), Django is by far the most popular. It supports both Python 2.7 and Python 3.6, but as of the time of writing, Python 2.7 is still the more accessible version in terms of community, 3rd party packages, and online documentation. Django is secure when used properly and provides high dimensions of flexibility, and is therefore the way to go when developing server-side applications using Python.

In this article, I will share with you best practices of a Django setup I’ve learned and collected over the recent years. Whether you have a few Django projects under your belt, or you’re just about to start your first Django project from scratch, the collection described here might help you create better applications down the road. The article has been written from a very practical mindset so you can add some tools to your development toolbox immediately, or even create yourself an advanced custom Django boilerplate for your next projects.

* In this article I assume you’re using a Linux Ubuntu machine.

Virtual Environment

While developing Python-based applications, using 3rd party packages is an ongoing thing. Typically, these packages are updated often, so keeping them organized is essential. When developing more and more different projects on the same local machine, it’s challenging to keep track of the current version of each package, and impossible to use different versions of the same package for different projects. Moreover, updating a package on one project might break functionality on another, and vice versa. That’s where the Python virtual environment comes in handy. To install virtual environment use:

$ apt-get update
$ apt-get install python-pip python-dev build-essential

$ export LC_ALL="en_US.UTF-8" # might be necessary in case you get an error from the next line

$ pip install --upgrade pip
$ pip install --upgrade virtualenv
$ mkdir ~/.virtualenvs
$ pip install virtualenvwrapper
$ export WORKON_HOME=~/.virtualenvs
$ nano ~/.bashrc

add this line to the end of the file:

. /usr/local/bin/virtualenvwrapper.sh

then execute:

$ . .bashrc

After installing, create a new virtual environment for your project by typing:

$ mkvirtualenv project_name

While you’re in the context of your virtual environment you’ll notice a prefix that is being added to the terminal, like:

(project_name) ofir@playground:~$

In order to deactivate (exit) the virtual environment and get back to the main Python context of your local machine, use:

$ deactivate

In order to activate (start) the virtual environment context, use:

$ workon project_name

To list the virtual environments that exist on your local machine, use:

$ lsvirtualenv

Holding your project dependencies (packages) in a virtual environment on your machine allows you to keep them in an isolated environment and only use them for a single (or multiple) projects. When creating a new virtual environment you’re starting a fresh environment with no packages installed in it. Then you can use, for example:

(project_name) $ pip install Django

for installing Django in your virtual environment, or:

(project_name) $ pip install Django==1.11

for installing version 1.11 of Django accessible only from within the environment.

Neither your main Python interpreter nor the other virtual environments on your machine will be able to access the new Django package you’ve just installed.
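
If you ever need to check programmatically which interpreter you're in, a small heuristic works; this is a hedged sketch using only the standard library (the virtualenv tool sets sys.real_prefix on Python 2, while Python 3 venvs expose sys.base_prefix):

```python
import sys

def in_virtualenv():
    # A virtual environment rewrites sys.prefix away from the base
    # interpreter's prefix; virtualenv also sets sys.real_prefix.
    base = getattr(sys, "base_prefix", sys.prefix)
    return hasattr(sys, "real_prefix") or base != sys.prefix

print(in_virtualenv())  # True when running inside an env, False otherwise
```

This is handy in deployment scripts that should refuse to install packages into the system interpreter by accident.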

In order to use the runserver command with your virtual environment, while in its context, use:

(project_name) $ cd /path/to/django/project
(project_name) $ ./manage.py runserver

Likewise, when entering the Python interpreter from within the virtual environment by typing:

(project_name) $ python

it will have access to packages you’ve already installed inside the environment.


Requirements

Requirements are the list of Python packages (dependencies) your project is using while running, including the version of each package. Here’s an illustrative example of a requirements.txt file (the exact version pins are placeholders):

Django==1.11
gunicorn==19.7.1
psycopg2==2.7.1

Keeping your requirements.txt file up to date is essential for collaborating properly with other developers, as well as keeping your production environment properly configured. This file, when included in your code repository, enables you to update all the packages installed in your virtual environment by executing a single line in the terminal, and by that to get new developers up and running in no time. In order to generate a new requirements.txt or to update an existing one, use from within your virtual environment:

(project_name) $ pip freeze > requirements.txt

For your convenience, make sure to execute this command in a folder that is being tracked by your Git repository so other instances of the code will have access to the requirements.txt file as well.

Once a new developer joins the team, or you want to configure a new environment using the same packages listed in the requirements.txt file, execute in the virtual environment context:

(project_name) $ cd /path/to/requirements/file
(project_name) $ pip install -r requirements.txt

All requirements listed in the file will immediately be installed in your virtual environment. Older versions will be upgraded and newer versions will be downgraded to fit the exact list in requirements.txt. Be careful, though: sometimes there are differences between environments that you still want to preserve.

I highly recommend integrating these commands to your work flow: updating the requirements.txt file before pushing code to the repository and installing requirements.txt file after pulling code from the repository.
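
The "older versions upgraded, newer versions downgraded" behavior boils down to comparing each installed version against its pin. A naive illustration of that bookkeeping (a sketch of the idea only; real pip handles far more formats than 'name==X.Y'):

```python
def parse_requirement(line):
    # Split a simple 'package==version' pin from a requirements.txt line.
    name, _, version = line.strip().partition("==")
    return name, version

def action_needed(installed, pinned):
    # Converge on the pinned version: upgrade if behind, downgrade if ahead.
    inst = tuple(int(p) for p in installed.split("."))
    want = tuple(int(p) for p in pinned.split("."))
    if inst < want:
        return "upgrade"
    if inst > want:
        return "downgrade"
    return "keep"

print(parse_requirement("Django==1.11"))  # ('Django', '1.11')
print(action_needed("1.10", "1.11"))      # upgrade
print(action_needed("2.0", "1.11"))       # downgrade
```

Pinning exact versions is what makes this convergence deterministic: every environment that installs from the same file ends up with the same package set.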

Better Configuration

Django comes out of the box with a very basic yet useful settings.py file that defines the main and most useful configurations for your project. The settings.py file is very straightforward, but sometimes, as a developer working in a team, or when setting up a production environment, you often need more than one basic settings file.

Multiple settings files allow you to easily define tailor-made configurations for each environment separately like:

ALLOWED_HOSTS # for production environment
DATABASES # for different developers on the same team

Let me introduce you to an extended approach for configuring your settings.py file, which allows you to easily maintain different versions and use the one you want at any given time, in any environment.

First, navigate to your settings.py file path:

(project_name) $ cd /path/to/settings/file

Then create a new module called settings (a module is a folder containing an __init__.py file):

(project_name) $ mkdir settings

Now, rename your settings.py file to base.py and place it inside the new module you created:

(project_name) $ mv settings.py settings/base.py

For this example, I assume that you want to configure one settings file for your development environment and one for your production environment. You can use the exact same approach for defining different settings files for different developers in the same team.

For your development environment create:

(project_name) $ nano settings/

Then type:

from .base import *

DEBUG = True

and save the file by hitting Ctrl + O, Enter and then Ctrl + X.

For your production environment create:

(project_name) $ nano settings/

and type:

from .base import *

DEBUG = False

Now, whenever you want to add or update the settings of a specific environment, you can easily do it in its own settings file. The last question is how Django knows which settings file to load in each environment. That’s what the __init__.py file is used for. When Django looks for the settings.py it used to load when running the server, for example, it now finds a settings module rather than a settings file. But as long as it’s a module containing an __init__.py file, as far as Django is concerned, it’s the exact same thing. Django will load the __init__.py file and execute whatever is written in it. Therefore, we need to define which settings file we want to load inside the __init__.py file, by executing:

(project_name) $ nano settings/__init__.py

and then, for a production environment, for example, typing:

from .production import *

This way, Django will load all the base.py and production.py settings every time it starts. Magic?
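
The override mechanism here is plain Python: a wildcard import pulls in every name from base, and any assignment after it wins. A self-contained sketch that simulates the two settings files as in-memory modules (module names and values are stand-ins):

```python
import sys
import types

# Stand-in for settings/base.py
base = types.ModuleType("settings_base")
base.DEBUG = True
base.SECRET = "dev-secret"
sys.modules["settings_base"] = base

# Stand-in for settings/production.py: import everything, then override.
prod = types.ModuleType("settings_production")
exec("from settings_base import *\nDEBUG = False", prod.__dict__)

print(prod.DEBUG)   # False -- the production override wins
print(prod.SECRET)  # dev-secret -- inherited from base
```

This is why the order matters inside development.py and production.py: the star import must come first, and any environment-specific assignments after it take precedence.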

Now, the only configuration left is to keep the settings/__init__.py file in your .gitignore file so it will not be included in pushes and pulls. Once you set up a new environment, don’t forget to create a new __init__.py file inside the settings module and import the required settings file exactly like we did before.

In this article we’ve covered three best practices for better setting up your Django project:

  • Working inside a virtual environment
  • Keeping your requirements.txt file up to date and using it continuously in your workflow
  • Setting up a better project settings structure.

This is part 1 in a series about best practices for Django development. Follow me to get an immediate update once the next parts are available.

Have you followed these best practices in your last project? Do you have any insights to share? Comments are highly appreciated.