Tag: technology

Command & Conquer: Startup Law Essentials

JoiRyde, my startup, had failed…but the mourning period was over. It was time to come back — smarter, faster and stronger than before.

I began the post-mortem and quickly realized that one of our glaring mistakes was a lack of legal structure. Looking back, my co-founders and I preferred focusing on what was easiest for us — the code. The legal ins and outs were tricky, and we delayed them until it was absolutely necessary (and too late!). Never again; I decided it was time to educate myself. I sought counsel with my friend Idan Bar-Dov. Idan is a lawyer who specializes in startups, and if Suits were real, this guy would be Harvey.

In this post, we try to demystify complex legal terms and clarify key issues usually encountered when starting your own company.

Just so we’re clear: This post is intended for informational purposes only and does not constitute legal advice.

Company 101

Ha! I know what you’re thinking! “I know what a company is dude … get to the goood stuff”.

Though we intuitively understand what a company is, its legal definition is quite different from what we know.

There are different kinds of companies. However, in the context of the startup world, when someone says “I just started my own company”, it usually means that they have incorporated a private for-profit organization. In other words, it’s a privately-operated legal entity intended for financial gain.

Legally speaking, a company is independent. It can enter into agreements, assume obligations, and be held accountable for its actions. Incorporators may share a great deal of interest with their company, but legally they are not the company. The employees, founders, board members, and even the CEO are all part of the company, but even combined they do not equate to the company itself.

Nevertheless, companies are run by people. Like a ship steered by its captain and crew, companies are led by their Board of Directors and operated by the people who control the “Company Organs”. These include shareholders, directors, officers etc., who affect the decision-making processes in their different capacities.


The Upside (and its limit)

Founders are usually the individuals who “give birth” to the company. They fill out the paperwork, pay the incorporation fee and sign the relevant documents.

In this context, remember the following rule of thumb: No Registration = No Incorporation! Namely, your company’s registration must be approved by the state.

The main benefit of incorporating is gaining a distinction between individual and company — a.k.a. a “corporate veil”. As separate legal entities, companies are accountable for their debts and obligations, as opposed to the individuals who operate them. In other words, each individual’s contribution marks the limit of their potential financial risk. Hence the term “limited liability company”.

However, the veil’s protection is not absolute. In certain cases, individuals may be held personally accountable for their actions, for example when using a company for fraud, illegal actions, or to circumvent existing obligations.

Equity — Make it Rain

Crash course definition: equity = holdings, owning a chunk of the company.

Two co-founders splitting equity

When founders say equity, they usually refer to shares. This is the company’s “currency”, which entitles its holders to certain privileges, such as financial, control, and information rights.

There is no gold standard for dividing equity between co-founders; it’s subject to your discretion. I believe that founders should be equal partners. Idan thinks that equity should reflect skill and contribution (Y Combinator has researched this extensively; you can read more here).

Exodus — Founder departure

Once equity has been issued, a founder’s reason for departure won’t matter much. They are already an equity owner — it’s theirs, no turning back.

Such a departure may generate “dead equity”: shares held by a founder who is no longer involved with the company. This creates an involvement-equity mismatch, which has a negative impact, especially on the company’s fundraising front.

The solution is defining the founders’ relationship upfront, by entering into a Founders Agreement. A departure case must be addressed before it occurs, preferably even before incorporating. Think of a founders agreement like a prenup, which obviously should be signed before getting married.

Equity-wise, it is customary to include a “vesting schedule”, which determines the founders’ equity eligibility. The purpose of a vesting arrangement is to incentivize founders to remain engaged with the company and diminish any involvement-equity mismatch.

A standard time-based vesting schedule will specify that if a founder leaves before date X, then they give up Y equity. For example, in a linear 3-year vesting schedule, if a founder leaves after 1 year, then they get to keep ⅓ of their shares and have to transfer the other ⅔ to the remaining founders. It’s a quid pro quo arrangement: work = equity.

Things to Remember

  1. Don’t procrastinate — Even if it seems dull, take care of your legal foundations from the get-go and don’t wait until it’s too late.
  2. Expectation management — Defining the founders’ relationship early on helps all founders see eye to eye.
  3. Attracting investors — Even if your product or tech is extremely impressive, having your legal ducks in a row is always a plus. It conveys awareness and shows investors you’re sincere, organized and devoted.
  4. Prevention is the best medicine — Building a company is an intense endeavor, and it can bring out a drastically emotional side of you. So set up defense mechanisms before taking off with your team, when everyone is as clear-headed and calm as possible.

If you liked this article, feel free to share it. You can follow Idan on LinkedIn. You can follow me on LinkedIn, Medium or GitHub for more technology / entrepreneurship content.

Deploy Django app: Nginx, Gunicorn, PostgreSQL & Supervisor

Django has been the most popular Python-based web framework for a while now. It is powerful, robust, full of capabilities and surrounded by a supportive community. Django is based on models, views and templates, similar to other MVC frameworks out there.

Django provides you with a development server out of the box once you start a new project using the commands:

$ django-admin startproject my_project 
$ python ./manage.py runserver 8000

With two lines in the terminal, you can have a working development server on your local machine so you can start coding. One of the tricky parts when it comes to Django is deploying the project so it will be available from different devices around the globe. As technological entrepreneurs, we need to not only develop apps with backend and frontend but also deploy them to a production environment which has to be modular, maintainable and of course secure.

django dev server

Deploying a Django app requires several mechanisms, which will be listed below. Before we begin, let’s align on the tools we are going to use throughout this post:

  1. Python version 2.7.6
  2. Django version 1.11
  3. Linux Ubuntu server hosted on DigitalOcean cloud provider
  4. Linux Ubuntu local machine
  5. Git repository containing your codebase

I assume you are already using 1, 2, 4 and 5. As for the Linux server, we are about to create it together during the first step of the deployment tutorial. Please note that this post discusses deployment on a single Ubuntu server. This configuration is great for small projects, but in order to scale your resources up to support larger amounts of traffic, you should consider a high-availability server infrastructure, using load balancers, floating IP addresses, redundancy and more.

Linux is much more popular for serving web apps than Windows. Additionally, Python and Django work together very well with Linux, and not so well with Windows.

There are many reasons for choosing DigitalOcean as a cloud provider, especially for small projects that will be deployed on a single droplet (a virtual server in DigitalOcean terminology). DigitalOcean is a great solution for software projects and startups which start small and scale up step by step. Read more about my comparison between DigitalOcean and Amazon Web Services in terms of an early-stage startup software project.

There are some best practices for setting up your Django project that I highly recommend you follow before starting the deployment process. These include working with a virtual environment, exporting a requirements.txt file and configuring the settings.py file for working with multiple environments.

django best practices
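To illustrate the multiple-environments idea, here is a minimal sketch of one common layout (the file names and values are illustrative assumptions, not part of the original post): settings are split into a shared base module and thin per-environment modules, and DJANGO_SETTINGS_MODULE selects which one is loaded.

# app/settings/base.py -- settings shared by all environments (illustrative)
SECRET_KEY = 'replace-me'
INSTALLED_APPS = [
    'django.contrib.contenttypes',
    'django.contrib.auth',
]

# app/settings/development.py -- local development
from .base import *
DEBUG = True
ALLOWED_HOSTS = []

# app/settings/production.py -- used on the production server
from .base import *
DEBUG = False
ALLOWED_HOSTS = ['your-domain-or-server-ip']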

This post will cover the deployment process of a Django project from A to Z on a brand-new Linux Ubuntu server. Feel free to choose your favorite cloud provider other than DigitalOcean for deployment.

As mentioned, Django’s built-in development server is weak and not built for scale. You can use it for developing your Django project yourself or for sharing it with your co-workers, but not more than that. In order to serve your app in a production environment, we need to use several components that will talk to each other and make the magic happen. Hosting a web application usually requires the orchestration of three actors (the resulting request flow is sketched right after the list):

  1. Web server
  2. Gateway
  3. Application
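
Putting the three together, the request flow we are about to build looks like this (the response travels back the same way):

Client (browser) -> Nginx (web server) -> Gunicorn (gateway) -> Django (application)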

The web server

The web server receives an HTTP request from the client (the browser) and is usually responsible for load balancing, proxying requests to other processes, serving static files, caching and more. The web server usually interprets the request and sends it to the gateway. Common web servers are Apache and Nginx. In this tutorial, we will use Nginx (which is also my favorite).

The Gateway

The gateway translates the request received from the web server so the application can handle it. The gateway is often responsible for logging and reporting as well. We will use Gunicorn as our Gateway for this tutorial.

The Application

As you may have already guessed, the application refers to your Django app. The app takes the interpreted request, processes it using the logic you implemented as a developer, and returns a response.


Assuming you have an existing ready-for-deployment Django project, we are going to deploy your project by following these steps:

  1. Creating a new DigitalOcean droplet
  2. Installing prerequisites: pip, virtualenv and git
  3. Pulling the Django app from Git
  4. Setting up PostgreSQL
  5. Configuring Gunicorn with Supervisor
  6. Configuring Nginx for listening to requests
  7. Securing your deployed app: setting up a firewall

Creating a droplet

A droplet in DigitalOcean refers to a virtual Linux server with CPU, RAM and disk space. The first step in this tutorial is creating a new droplet and connecting to it via SSH. Assuming your local machine is running Ubuntu, we are going to create a new SSH key pair in order to easily and securely connect to our droplet once it is created. Connecting using SSH keys (rather than a password) is both simpler and more secure. If you already have an SSH key pair, you can skip the creation process. On your local machine, enter in the terminal:

$ ssh-keygen -t rsa

You will be asked two questions: where to store the keys (the default is fine) and whether you want to set a passphrase (not essential).

Now the key pair is located in:

/home/user/.ssh/

where id_rsa.pub is your public key and id_rsa is your private key. In order to use the key pair to connect to a remote server, the public key should be located on the remote server and the private key should be located on your local machine.

Notice that the public key can be located on every remote server you wish to connect to. But, the private key must be kept only on your local machine! Sharing the private key will enable other users to connect to your server.

After signing up with DigitalOcean, open the SSH page and click on the Add SSH Key button. In your terminal copy the newly-created public key:

$ cat /home/user/.ssh/id_rsa.pub

Enter the new public key you generated and name it as you wish.

SSH key

Once the key is stored in your account, you can assign it to every droplet you create. The droplet will contain the key so you can connect to it from your local machine, while password authentication will be disabled by default, which is highly recommended.

Now we are ready to create our droplet. Click on “Create Droplet” at the top bar of your DigitalOcean dashboard.

create droplet

Choose Ubuntu 16.04 64bit as your image, a droplet size of either 512MB or 1GB RAM, and whatever region makes sense to you.

 

image distro

droplet size

droplet region

You can select the private networking feature (which is not essential for this tutorial). Make sure to select the SSH key you’ve just added to your account. Name your new droplet and click “Create”.

private networking

select ssh keys

create droplet

Once your new droplet has been created, you should be able to connect to it easily using the SSH key you created. In order to do that, copy the IP address of your droplet from your droplets page inside your dashboard, go to your local terminal and type:

$ ssh root@IP_ADDRESS_COPIED

Make sure to replace IP_ADDRESS_COPIED with your droplet’s IP address. You should be connected by now.

Tip for advanced users: in case you want to configure an even simpler way to connect, add an alias to your droplet by editing the file:

$ nano /home/user/.ssh/config

and adding:

Host remote-server-name 
    Hostname DROPLET_IP_ADDRESS 
    User root

Make sure to replace remote-server-name with a name of your choice, and DROPLET_IP_ADDRESS with the IP address of the server.

Save the file by hitting Ctrl+O and then close it with Ctrl+X. Now all you need to do in order to connect to your droplet is typing:

$ ssh remote-server-name

That simple.

Installing prerequisites

Once connected to your droplet, we are going to install some software in order to start our deployment process. Start by updating your repositories and installing pip and virtualenv.

$ sudo apt-get update 
$ sudo apt-get install python-pip python-dev build-essential libpq-dev postgresql postgresql-contrib nginx git virtualenv virtualenvwrapper 
$ export LC_ALL="en_US.UTF-8" 
$ pip install --upgrade pip 
$ pip install --upgrade virtualenv

Hopefully, you already work with a virtual environment on your local machine. In case you don’t, I highly recommend reading my best practices post for setting up a Django project in order to understand why working with virtual environments is an essential part of your Django development process.

Let’s get to configuring the virtual environment. Create a new folder with:

$ mkdir ~/.virtualenvs 
$ export WORKON_HOME=~/.virtualenvs

Configure the virtual environment wrapper for easier access by running:

$ nano ~/.bashrc

and adding this line to the end of the file:

. /usr/local/bin/virtualenvwrapper.sh

Tip: use Ctrl+V to scroll down faster, and Ctrl+Y to scroll up faster inside the nano editor.

Hit Ctrl+O to save the file and Ctrl+X to close it. In your terminal type:

$ . .bashrc

Now you should be able to create your new virtual environment for your Django project:

$ mkvirtualenv virtual-env-name

From within your virtual environment install:

(virtual-env-name) $ pip install django gunicorn psycopg2

Tip: Useful commands for working with your virtual environment:

$ workon virtual-env-name # activate the virtual environment 
$ deactivate # deactivate the virtual environment

Pulling application from Git

Start by creating a new user that will hold your Django application:

$ adduser django 
$ cd /home/django 
$ git clone REPOSITORY_URL

Assuming your codebase is already located in a Git repository, just type your password and your repository will be cloned onto your remote server. You might need to add permissions to execute manage.py by navigating into your project folder (the one you’ve just cloned) and typing:

$ chmod 755 ./manage.py

In order to take the virtual environment one step further in terms of simplicity, copy the path of your project’s main folder to the virtual environment settings by typing:

$ pwd > /root/.virtualenvs/virtual-env-name/.project

Make sure to replace virtual-env-name with the real name of your virtual environment. Now, once you use the workon command to activate your virtual environment, you’ll be navigated automatically to your project’s main path.

In order to set up the environment variable properly, type:

$ nano /root/.virtualenvs/virtual-env-name/bin/postactivate # replace virtual-env-name with the real name

and add this line to the file:

export DJANGO_SETTINGS_MODULE=app.settings

Make sure to replace app.settings with the location of your settings module inside your Django app. Save and close the file.

Assuming you’ve set up your requirements.txt file as described in the Django best practices post, you’re now able to install all your requirements at once by navigating to the path of the requirements.txt file and run from within your virtual environment:

(virtual-env-name) $ pip install -r requirements.txt

Setting up PostgreSQL

Assuming you’ve set up your settings module as described in the Django best practices post, you should have by now a separation between the development and production settings files. Your production.py settings file should contain PostgreSQL connection settings as well. If it doesn’t, add to the file:

DATABASES = { 
    'default': { 
        'ENGINE': 'django.db.backends.postgresql', 
        'NAME': 'app_db', 
        'USER': 'app_user', 
        'PASSWORD': 'password', 
        'HOST': 'localhost', 
        'PORT': '5432', 
    } 
}

I highly recommend updating and pushing the file on your local machine and pulling it from the remote server using the repository we cloned.

Let’s get to creating the production database. Inside the terminal, type:

$ sudo -u postgres psql

Now you should be inside PostgreSQL terminal. Create your DB and user with:

> CREATE DATABASE app_db; 
> CREATE USER app_user WITH PASSWORD 'password'; 
> ALTER ROLE app_user SET client_encoding TO 'utf8'; 
> ALTER ROLE app_user SET default_transaction_isolation TO 'read committed'; 
> ALTER ROLE app_user SET timezone TO 'UTC'; 
> ALTER USER app_user CREATEDB; 
> GRANT ALL PRIVILEGES ON DATABASE app_db TO app_user;

Make sure your details here match the production.py settings file DB configuration as described above. Exit the PostgreSQL shell by typing \q.

Now you should be ready to run migrations command on the new DB. Assuming all of your migrations folders are in the .gitignore file, meaning they are not pushed into the repository, your migrations folders should be empty. Therefore, you can set up the DB by navigating to your main project path with:

(virtual-env-name) $ cdproject

and then run:

(virtual-env-name) $ python ./manage.py migrate
(virtual-env-name) $ python ./manage.py makemigrations
(virtual-env-name) $ python ./manage.py migrate

Don’t forget to create yourself a superuser by typing:

(virtual-env-name) $ python ./manage.py createsuperuser

Configuring Gunicorn with Supervisor

Now once the application is set up properly, it’s time to configure our gateway for sending requests to our Django application. We will use Gunicorn as our gateway, which is commonly used.

Start by navigating to your project’s main path by typing:

(virtual-env-name) $ cdproject

First, we will test gunicorn by typing:

(virtual-env-name) $ gunicorn --bind 0.0.0.0:8000 app.wsgi:application

Make sure to replace app with your app’s name. Once gunicorn is running your application, you should be able to access http://IP_ADDRESS:8000 and see your application in action.
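For context, app.wsgi:application refers to the WSGI callable that django-admin startproject generates in app/wsgi.py (the module path assumes a project named app). It typically looks like this:

import os
from django.core.wsgi import get_wsgi_application

# Point Django at the settings module, then expose the WSGI callable that Gunicorn loads
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "app.settings")
application = get_wsgi_application()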

When you’re finished testing, hit Ctrl+C to stop gunicorn from running.

Now it’s time to run gunicorn as a service to make sure it’s running continuously. Rather than setting up a systemd service, we will use a more robust way with Supervisor. Supervisor, as the name suggests, is a great tool for monitoring and controlling processes. It helps you better understand how your processes operate.

To install supervisor, type outside of your virtual environment:

$ sudo apt-get install supervisor

Once supervisor is running, every .conf file that is included in the path:

/etc/supervisor/conf.d

represents a monitored process. Let’s add a new .conf file to monitor gunicorn:

$ nano /etc/supervisor/conf.d/gunicorn.conf

and add into the file:

[program:gunicorn] 
directory=/home/django/app-django/app 
command=/root/.virtualenvs/virtual-env-name/bin/gunicorn --workers 3 --bind unix:/home/django/app-django/app/app.sock app.wsgi:application 
autostart=true 
autorestart=true 
stdout_logfile=/var/log/gunicorn/gunicorn.out.log 
stderr_logfile=/var/log/gunicorn/gunicorn.err.log 
user=root 
group=www-data 
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8 

[group:guni] 
programs=gunicorn

Make sure that all the references are properly configured. Save and close the file.
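One caveat (an assumption based on the log paths used above): Supervisor will not create the log directory for you, so if your logfile paths point to /var/log/gunicorn, create it before reloading the configuration:

$ mkdir -p /var/log/gunicorn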

Now let’s update supervisor to monitor the gunicorn process we’ve just created by running:

$ supervisorctl reread 
$ supervisorctl update

In order to validate the process integrity, use this command:

$ supervisorctl status

By now, gunicorn operates as an internal process rather than a process that can be accessed by users outside the machine. In order to start sending traffic to gunicorn and then to your Django application, we will set up Nginx to serve as a web server.

Configuring Nginx

Nginx is one of the most popular web servers out there. The integration between Nginx and Gunicorn is seamless. In this section, we’re going to set up Nginx to send traffic to Gunicorn. In order to do that, we will create a new configuration file (make sure to replace app with your own app name):

$ nano /etc/nginx/sites-available/app

then edit the file by adding:

server { 
    listen 80; 
    server_name SERVER_DOMAIN_OR_IP; 
    location = /favicon.ico { access_log off; log_not_found off; } 
    location /static/ { 
        root /home/django/app-django/app; 
    } 
    location / { 
        include proxy_params; 
        proxy_pass http://unix:/home/django/app-django/app/app.sock; 
    } 
}

This configuration will proxy requests to the appropriate route in your server. Make sure to set all the references properly according to Gunicorn and to your app configurations.
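Note that the /static/ location will only serve files that have actually been collected into that directory. Assuming your production settings define STATIC_ROOT as the static/ folder under the project root used in the Nginx config above (this is an assumption about your settings module, not something configured earlier in this post), run from your project path:

(virtual-env-name) $ python ./manage.py collectstatic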

Initiate a link with:

$ ln -s /etc/nginx/sites-available/app /etc/nginx/sites-enabled

Check Nginx configuration by running:

$ nginx -t

Assuming all good, restart Nginx by running:

$ systemctl restart nginx

By now you should be able to access your server just by typing its IP address in the browser, because Nginx listens on port 80, which is the default port browsers use.

Security

Well done! You should have a deployed Django app by now! Now it’s time to secure the app to make it much more difficult to hack. In order to do that, we will use ufw, the built-in Linux firewall.

ufw works by configuring rules. Rules tell the firewall which kind of traffic it should accept or decline. At this point, there are two kinds of traffic we want to accept, or in other words, two ports we want to open:

  1. port 80 for listening to incoming traffic via browsers
  2. port 22 to be able to connect to the server via SSH.

Open the ports by typing:

$ ufw allow 80 
$ ufw allow 22

then enable ufw by typing:

$ ufw enable

Tip: before closing the terminal, make sure you are able to connect via SSH from another terminal, so you’re not locked out of your droplet due to a bad firewall configuration.
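
You can review the active rules at any time with:

$ ufw status verbose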

What to do next?

This post is the ultimate guide to deploying a Django app on a single server. In case you’re developing an app that should serve larger amounts of traffic, I suggest you look into highly scalable server architectures. You can start with my post about how to design a high-availability server architecture.

How to choose a cloud computing technology for your startup

Cloud computing technology has become a standard for developing applications nowadays. A few years ago, companies were forced to have dedicated teams for configuring, running and maintaining server rooms, which made it extremely difficult to scale up easily and offer a sustainable product. For small startups, it was even more difficult due to the lack of human resources as well as funding.

These days, not only are there cloud computing technologies for almost every architecture you might imagine, but the cloud vendors also compete nonstop for our (the developers’) attention. Most of the largest tech companies, like Google, Amazon and IBM, launched cloud services in the past few years. They advertise, offer free tiers, present at tech conferences and conduct free-of-charge workshops for experimenting with their cloud solutions. They are aware that once you fall in love with their services, they will most likely be your favorite choice in every project for years to come.

So what is a cloud provider anyway? A cloud provider is an entity that offers cloud services for operating your application. Operating may include running servers, serving your application, hosting static files, providing database solutions, handling networking between servers, managing DNS and much more. Different cloud vendors offer different levels of abstractions in their services, usually defined as IaaS vs. PaaS.

cloud computing technology

IaaS (Infrastructure-as-a-service)

IaaS, or infrastructure-as-a-service, refers to a low-level solution, like providing a Linux Ubuntu server with nothing installed on it. This kind of solution is suitable for more advanced developers who have experience designing, configuring and securing server infrastructure in all its aspects. IaaS services provide you with flexibility and scalability down the road, and this will most likely be the way to go when designing an application for scale. This approach requires, as mentioned before, at least one developer in your startup who has this skill set; otherwise, your product will turn into a big mess sooner rather than later.

PaaS (Platform-as-a-service)

PaaS, or platform-as-a-service, refers to a fully maintained and managed environment hidden under a layer of abstraction you should not even have to care about. The cloud vendor takes care of maintaining the servers needed for your operations, and you get high-level databases for storing your data, services for user authentication, endpoints for client-side applications, etc. This approach is much easier and faster to get up and running with, and typically satisfies most basic applications. You should take into consideration, though, that for more complex architectures it might not be enough.

Generally speaking, both IaaS and PaaS are huge time-savers when dealing with deploying and serving applications. You are able to run a server with the click of a button and usually pay per use. Scaling your servers can be done manually, or even automatically using APIs when a peak in traffic suddenly occurs. You can be sure that you’re in good company (as long as you choose wisely), and whatever you can imagine, you can basically create.

cloud providers list

In early-stage startups, using cloud computing technologies became a standard because of the flexibility, the pricing models and the accessibility. Choosing the best cloud service for your startup is an essential task every technological entrepreneur must perform. As the head of development in your company, you should know the differences between the main alternatives, and choose the one that suits your product best.

Technical debt may stack up in the case of a bad decision. In addition, migrating an entire architecture from one cloud provider to another is not a trivial task at all. Therefore, you should know the differences, experiment with each of the main alternatives and make a wise decision.

After examining and experiencing the best cloud providers out there, and using them in a wide variety of projects, I’ll take my top two, AWS and DigitalOcean, and compare them using a set of parameters.

 

I’ve chosen these two cloud providers as my top picks after grading each of them on the most important parameters when building a startup from the ground up:

  1. Features (offering) – how wide is the range of available cloud computing technologies, integrations and possibilities for the next generations of your application. In order to build for scale, you need to be sure that a cloud vendor can support your application for years to come.
  2. Pricing – Available pricing models, free tiers for startups and pricing transparency. Early-stage startups (startups that fund themselves) look for the largest value possible at the lowest price.
  3. Ease of use – How fast an intermediate developer can build a basic cloud architecture and deploy their application, how easy it is to iterate on the existing cloud architecture, and what the learning curve is like for beginners.
  4. Tutorials and support – Availability of online resources to help you get up and running with different services, as well as human customer support accessibility.

Three, two, one, fight!

Features

How wide is the range of services offered?

Amazon Web Services: AWS has by far the widest range of services when it comes to offerings. If you don’t find a cloud computing technology in the AWS catalog, you’ll most likely not find it anywhere else. AWS has many different IaaS and PaaS services dedicated to every task a server needs to perform, divided into organized categories. When using AWS you can be sure that your startup’s scalability potential is endless. On the other hand, the offering might sometimes be confusing for beginners, as it makes the getting-started process a little longer. If your application has many custom components that are not trivial, AWS might be the cloud provider you should consider.

Grade: 5/5

 

DigitalOcean: DigitalOcean offers a relatively narrow range of services. As for IaaS, you can find droplets (servers), data storage units, networking and monitoring services. As for PaaS, you can easily deploy apps with zero configuration needed, like Node.js, Redis, Docker etc. Although the offering is very concise, I find it to be exactly what you need for more than 80% of applications. In addition to the standard droplets, high-CPU and high-memory droplets are available for custom use, as well as backups and snapshots for each droplet. The DigitalOcean team is working nonstop on increasing their offering based on community requests. As a developer who has been using DigitalOcean for quite a long time now, I can say that their desire to satisfy their community is highly appreciated.

Grade: 4/5

Amazon Web Services cloud provider

Pricing

Pricing models available and transparency

Amazon Web Services: AWS is based on a pay-per-use pricing model. Every cloud computing technology has its own unique pricing, and a pricing calculator is available for estimating your costs upfront. You might find this calculator a bit complex if you haven’t used AWS before. In order to estimate your costs up front, you need to translate your server architecture design into AWS terms, and then estimate by choosing the appropriate services from the sidebar. I find that the wide range of offerings sometimes overshadows the cost estimations, so I sometimes find it useful to simply start firing up services and track the costs inside the dashboard using pricing alerts. On the other hand, AWS offers a very useful free tier for 12 months that can help early-stage startups get up and running.

Grade: 3.5/5

 

DigitalOcean: DigitalOcean’s extremely transparent pricing exists in two different yet similar models: pay per hour and pay per month. When using DigitalOcean you have no surprises: you can calculate the exact amount that will be charged, thanks to fixed prices for each droplet unit. Starting at $5/month for a 512MB droplet, DigitalOcean is suitable even for tiny side projects. Besides droplets and data storage units, which are charged according to the resources allocated to you, networking, monitoring, alerts, DNS management and more are completely free of charge. Bottom line: you pay only for the allocated resources, and you get a lot of useful extra components as a free-of-charge service.

Grade: 4.5/5

 

Ease of use

How easy is it to get up and running as well as to iterate

Amazon Web Services: The AWS dashboard is quite comfortable once you get used to it. Because of the large number of services, you might find it a bit overcrowded in comparison to the other alternatives presented here. You can use the default settings for your services to get up and running relatively quickly, but if you’d like to dive deeper into the details (also for reducing costs) you might find yourself spending quite a lot of time on configurations in the AWS dashboard. On the other hand, in large-scale applications, you may find the additional features available for each service extremely useful and necessary.

Grade: 4/5

 

DigitalOcean: DigitalOcean is branded for a reason as “Cloud computing, designed for developers”. As developers, we have so many things to take care of, especially when in charge of the end-to-end technological stack of our startup. Therefore, we need our cloud provider to be as simple as possible to set up. DigitalOcean’s user interface is the best I’ve used. It’s intuitive and lets you get up and running in minutes, even when using it for the first time. You don’t need to explore and scroll through too many features and options; just choose your Linux distribution, plan and geographic location, and you’re up and running in no time.

Grade: 5/5

DigitalOcean droplet creation

Tutorials and support

Available resources and support team

Amazon Web Services: AWS has a very useful tutorials library. There are many tutorials, but some seem less detailed and user-friendly than others. You need to be experienced with server infrastructure design before approaching many of the AWS tutorials, so it might take you some time to explore their library before you’re able to actually find what you’re looking for. On the other hand, their customer support team is extraordinary. AWS support agents are super responsive and attentive, and will answer your questions in a professional way.

Grade: 4/5

 

DigitalOcean: The tutorials library of DigitalOcean is endless. In almost every Google search about a topic related to servers or cloud infrastructure, you’ll find results from DigitalOcean’s tutorials library. The tutorials are well written and cover important principles alongside the technicalities of how to achieve your goal. In addition to accomplishing your task, you’re actually learning new things when following DigitalOcean’s tutorials. The support team is very responsive and professional, and free-of-charge virtual meetings with cloud specialists are available to help you design your server architecture.

Grade: 5/5

 

Summary – choosing the best cloud computing technology for your startup

Amazon Web Services: AWS is by far the leading cloud provider when it comes to offering, scalability and features. On the other hand, its learning curve is moderate, so if you haven’t worked with AWS before, it might take you some time to get up and running properly.

Final startup grade: 4.5/5

 

DigitalOcean: I like comparing DigitalOcean to a boutique hotel. When using their cloud computing technologies you feel like you’re part of a family and are treated like one. DigitalOcean covers everything you need as an early-stage startup, is easy to use and provides convenient, predictable pricing models.

Final startup grade: 5/5

best cloud solution

The most important thing about a cloud provider is to have one. In our world, it’s much better to have your application deployed with a slightly smaller cloud provider than to keep arguing about which provider is better when you have no idea where your application will be six months from now.

If you’re familiar with one of the cloud vendors, use it for your main startup unless you’re sure it will not meet your requirements.

When developing side projects, I highly encourage you to try and play with new cloud providers. Who knows, maybe you’ll fall in love with another.

Try DigitalOcean with $10 credit

Try AWS free tier