Send Deno to the Cloud (Deploy)

When it comes to deploying Deno applications, we have several options. Each deployment method offers unique benefits that address different use cases and requirements. Let’s take a closer look at these deployment strategies.

Standalone binary deployment

Binary deployment is one of the simplest approaches for deploying terminal applications or standalone programs that run on local machines or traditional virtual machines. The process involves compiling your Deno application into a single executable file that contains everything needed to run your application. More information about what’s possible with deno compile can be found in the respective chapter.

To compile your Deno application for the AMD64 architecture (Intel or AMD CPUs), use the following command:

deno compile --target x86_64-unknown-linux-gnu --output my-app --allow-net --allow-read main.ts

If you want to run it on an ARM Mac (Apple Silicon), use the following target:

deno compile --target aarch64-apple-darwin --output my-app --allow-net --allow-read main.ts

For an ARM Linux machine:

deno compile --target aarch64-unknown-linux-gnu --output my-app --allow-net --allow-read main.ts

Once we have transferred the binary to our target machine, we can make it executable and run it:

chmod +x ./my-app
./my-app

Given the application code used in the Basic CRUD server, we can now visit localhost:8000/books/ and get a response from our application.

curl http://localhost:8000/books/
{"1":"Atomic Habits","2":"The Alchemist"}
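For reference, a minimal sketch of such a server could look like the following. This is illustrative only and not the actual code from the Basic CRUD server; the handler here covers just the GET /books/ route:

```typescript
// Illustrative sketch (not the actual chapter code): an in-memory
// book store and a request handler for the /books/ route.
const books: Record<string, string> = {
  "1": "Atomic Habits",
  "2": "The Alchemist",
};

// A pure handler keeps the routing logic easy to test in isolation.
function handler(req: Request): Response {
  const url = new URL(req.url);
  if (req.method === "GET" && url.pathname === "/books/") {
    return new Response(JSON.stringify(books), {
      headers: { "content-type": "application/json" },
    });
  }
  return new Response("Not Found", { status: 404 });
}

// Passing the handler to Deno.serve starts an HTTP server,
// listening on port 8000 by default:
//   Deno.serve(handler);
```

Deno.serve defaults to port 8000, which matches the curl example above.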

To ensure that our application continues to run after a crash or reboot, it’s recommended that we create a systemd service on systems where this is supported. Most popular Linux distributions use systemd as their init system and service manager.

Setting this up is beyond the scope of this guide, but there are plenty of resources on the web that cover it.
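To give a rough idea, a unit file for our binary could look something like this (the file path, binary location, and user name are placeholders):

```ini
# /etc/systemd/system/my-app.service (names and paths are placeholders)
[Unit]
Description=My Deno application
After=network.target

[Service]
ExecStart=/opt/my-app/my-app
Restart=always
User=deno-app

[Install]
WantedBy=multi-user.target
```

After placing the file, sudo systemctl enable --now my-app would start the service and register it to start on boot.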

Deno Deploy

Deno Deploy stands out as the most streamlined deployment option, specifically designed for hosting Deno applications. Since it is built by the Deno maintainers, it should be the “easiest” place to host your application.

It runs on a specialised runtime that, while different from the standard Deno runtime, maintains full compatibility with Deno applications. This specialised environment uses V8 isolates for enhanced security and remarkably fast startup times.

Setup is as simple as connecting to your GitHub account and going through the setup process. After the initial setup, we can turn on automatic deployments when new code is pushed to the configured branch.

If this type of deployment doesn’t suit you, or you need to perform other tasks before deploying (e.g. running tests, lints, etc.), you can use GitHub Actions together with Deno’s deployctl. This CLI tool can also be used directly from the terminal.
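For example, a manual deployment from the terminal could look like this (the project name is a placeholder, and the deploy step requires authentication via browser login or a DENO_DEPLOY_TOKEN environment variable):

```shell
# Install deployctl once (-g global, -A all permissions, -r reload, -f force)
deno install -gArf jsr:@deno/deployctl

# Deploy the entrypoint to an existing Deno Deploy project
# (project name is a placeholder)
deployctl deploy --project=my-project main.ts
```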

Here’s an example GitHub Actions workflow taken from the deployctl repository for automated deployments to Deno Deploy:

name: Deploy

on: push

jobs:
  deploy:
    runs-on: ubuntu-latest

    permissions:
      id-token: write # This is required to allow the GitHub Action to authenticate with Deno Deploy.
      contents: read

    steps:
      - name: Clone repository
        uses: actions/checkout@v4

      # ... Do your tests, lints, formatting here

      - name: Deploy to Deno Deploy
        uses: denoland/deployctl@v1
        with:
          project: my-project # the name of the project on Deno Deploy
          entrypoint: main.ts # the entrypoint to deploy

Deno Deploy is trusted by major companies such as Slack, Netlify and Supabase. Its infrastructure, built on Google Cloud Platform, ensures that your applications run close to your users through edge deployment. The free tier generously includes 1 million requests and 100GB of egress traffic, making it an attractive option for projects of various sizes.

Put Deno in a container

Container deployment offers flexibility and consistency across different environments. With Deno packaged in a container, we can host our applications anywhere a container runtime such as Docker or Podman is available.

Deno provides official images on Docker Hub that can be used as base images for your applications. These images come in two flavours:

  1. Full OS images (Alpine, Debian (default), Distroless or Ubuntu)
  2. Binary only images (bin tag)

Here’s an example Dockerfile for a Deno application:

# Alternative base tags: :alpine, :debian (default), :distroless, :bin
FROM denoland/deno:ubuntu

WORKDIR /app
COPY . .

RUN deno cache main.ts

EXPOSE 8000

USER deno

CMD ["run", "--allow-net", "main.ts"]

The official images include a dedicated deno user, which promotes security best practices by avoiding execution as root. When using these images, note that the Debian variant is used if no tag is specified.
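With this Dockerfile in place, building and running the image locally might look like the following (the image name is a placeholder, and we assume the application listens on port 8000 as in the curl example above):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t my-deno-app .

# Run it, mapping the container port to the same port on the host
docker run -p 8000:8000 my-deno-app
```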

fly.io Deployment

fly.io provides a unique approach to container deployment, using Firecracker microVMs to achieve exceptional startup performance. While similar to Deno Deploy in its edge deployment capabilities, fly.io offers broader support for containerised applications.

Since fly.io builds container images from a Dockerfile, we can reuse the one defined above.

To deploy our http server on fly.io, we can either use a GitHub repository to define our service, or we can use fly’s flyctl, which can be installed using brew install flyctl.

From there we can run fly launch to start the setup process with fly.io, which will ask for our service settings. The output will look similar to the following:

fly launch
? You must be logged in to do this. Would you like to sign in? Yes
Opening https://fly.io/app/auth/cli/...

Waiting for session... Done
successfully logged in as email@example.com
Scanning source code
Detected a Dockerfile app
Creating app in /Users/user/code/deno-fly
We're about to launch your app on Fly.io. Here's what you're getting:

Organization: Our-Organisation             (fly launch defaults to the personal org)
Name:         deno-fly-twilight-river-4887 (generated)
Region:       Stockholm, Sweden            (this is the fastest region for you)
App Machines: shared-cpu-1x, 1GB RAM       (most apps need about 1GB of RAM)
Postgres:     <none>                       (not requested)
Redis:        <none>                       (not requested)
Tigris:       <none>                       (not requested)

Once we have completed these steps, we receive a URL where our application can be reached. With that, it is available on the Internet.

After successfully running the fly launch command, we can see that a new fly.toml has been created containing our chosen settings. It will be used for every future fly deploy when we change our code and want to deploy it to fly.io.

An example fly.toml:

app = 'deno-fly-twilight-river-4887'
primary_region = 'arn'

[build]

[http_service]
  internal_port = 8000
  force_https = true
  auto_stop_machines = 'stop'
  auto_start_machines = true
  min_machines_running = 0
  processes = ['app']

[[vm]]
  memory = '1gb'
  cpu_kind = 'shared'
  cpus = 1

This deployment method combines the benefits of containerisation with the performance advantages of microVMs, making it suitable for applications that require both flexibility and speed. This makes it interesting for applications beyond Deno. There is no free tier anymore, but fly.io offers a pay-as-you-go plan. The smallest machine costs around $2/month.

Concluding the deployment of Deno

Each deployment method has its own strengths, and the choice between them depends on specific requirements such as scalability needs, deployment frequency and infrastructure preferences. Whether you need the simplicity of binary deployment, the seamless integration of Deno Deploy, the portability of containers or the edge computing capabilities of fly.io, the Deno ecosystem has a solution to meet your needs.