Blogroll: CloudFlare

I read blogs, as well as write one. The 'blogroll' on this site reproduces some posts from some of the people I enjoy reading. There are currently 35 posts from the blog 'CloudFlare.'

Disclaimer: Reproducing an article here does not necessarily imply agreement or endorsement!

Here at Cloudflare, we make the Internet work the way it should. The Cloudflare Blog provides cutting-edge updates on internet performance, security, reliability, serverless computing, and more.



We are working really hard to allow you to deploy Workers without having a Cloudflare domain. You will soon be able to deploy your Cloudflare Workers to a, which you can go claim now on!

Why are we doing this?

You may have read the announcement blog post for Workers (or one of the many tutorials and guest posts), and thought “let me give this a try!”. If you’re an existing Cloudflare customer, you logged into the dashboard, found a new icon called “Workers”, paid $5 and were on your way. If you’re not, you clicked “Sign Up”, but instead of getting to create and deploy a Worker, we asked you for your domain (if you didn’t have one, we had you register one), and to move your nameservers.

Since launch, we have had tons of people who wanted to build a new serverless project from scratch or just try Workers out, but found it difficult to get started. We want to make it easier for anyone to get started building and deploying serverless applications.

How did we get here?

The way you get started on Workers today reflects our journey as a company. Our mission to “help build a better Internet” has remained consistent, but the ways in which we can help do that have evolved. We didn’t start as a serverless company; we started out as a way for you to secure and accelerate your web properties.


After the launch of Workers, it immediately became clear that the possibilities for Workers are endless. We truly believe that Workers are a revolutionary technology that will change how developers think about compute.


Our Serverless Mission

We are still on a mission to help build a better Internet. To us, a world with better Internet means a world where there is no distinction between writing code and deploying global applications.

That is, an independent developer can have access to the same scalable infrastructure as a multi-million dollar business to run their code on.

To remove any obstacles preventing developers from using Workers today, we’re going to allow developers to run Workers on subdomains of

As a part of Google’s TLD launch program, we were lucky enough to obtain, to allow customers to run their Workers on.


You can now go to and read about all the benefits of Workers that we’re really excited about.

Additionally, in you will be able to claim a subdomain (one per user) for you to run Workers on. After choosing your subdomain, you will be asked to verify your email. If you already have a Cloudflare account, please use the same email you used to sign up. We will also use the same email to notify you once we’re ready to let you deploy your first Worker to the subdomain you selected.

Note that is fully served using Cloudflare Workers. All of the following is done using Workers:

  • Serving the static site from storage
  • A/B testing our light vs. dark theme
  • The pre-registration API that allows you to claim a subdomain, validates your email, and reserves it for you (more on that soon).

Though we fully utilize Workers, no Workers were harmed in the making of this site.

We look forward to seeing the great things you build with Workers.
Check out to claim your subdomain, and read about the Workers platform.

Categories: Technology

SOCKMAP - TCP splicing of the future

Mon, 18/02/2019 - 13:13

Recently we stumbled upon the holy grail for reverse proxies - a TCP socket splicing API. This caught our attention because, as you may know, we run a global network of reverse proxy services. Proper TCP socket splicing reduces the load on userspace processes and enables more efficient data forwarding. We realized that the Linux kernel's SOCKMAP infrastructure can be reused for this purpose. SOCKMAP is a very promising API and is likely to cause a tectonic shift in the architecture of data-heavy applications like software proxies.


Image by Mustad Marine public domain

But let’s rewind a bit.

Birthing pains of L7 proxies

Transmitting large amounts of data from userspace is inefficient. Linux provides a couple of specialized syscalls that aim to address this problem. For example, the sendfile(2) syscall (which Linus doesn't like) can be used to speed up transferring large files from disk to a socket. Then there is splice(2), which traditional proxies use to forward data between two TCP sockets. Finally, vmsplice(2) can be used to stick a memory buffer into a pipe without copying, but it is very hard to use correctly.

Sadly, sendfile, splice and vmsplice are very specialized, synchronous and solve only one part of the problem - they avoid copying the data to userspace. They leave other efficiency issues unaddressed.

                 between                  avoid user-space memory   zerocopy
sendfile         disk file --> socket     yes                       no
splice           pipe <--> socket         yes                       yes?
vmsplice         memory region --> pipe   no                        yes

Processes that forward large amounts of data face three problems:

  1. Syscall cost: making multiple syscalls for every forwarded packet is costly.

  2. Wakeup latency: the user-space process must be woken up often to forward the data. Depending on the scheduler, this may result in poor tail latency.

  3. Copying cost: copying data from kernel to userspace and then immediately back to the kernel is not free and adds up to a measurable cost.

Many tried

Forwarding data between TCP sockets is a common practice. It's needed for:

  • Transparent forward HTTP proxies, like Squid.
  • Reverse caching HTTP proxies, like Varnish or NGINX.
  • Load balancers, like HAProxy, Pen or Relayd.

Over the years there have been many attempts to reduce the cost of dumb data forwarding between TCP sockets on Linux. This issue is generally called “TCP splicing”, “L7 splicing”, or “Socket splicing”.

Let’s compare the usual ways of doing TCP splicing. To simplify the problem, instead of writing a rich Layer 7 TCP proxy, we'll write a trivial TCP echo server.

It's not a joke. An echo server can illustrate TCP socket splicing well. You know - "echo" basically splices the socket… with itself!

Naive: read write loop

The naive TCP echo server would look like:

while data:
    data = read(sd, 4096)
    write(sd, data)

Nothing simpler. On a blocking socket this is a totally valid program, and will work just fine. For completeness I prepared full code here.

Splice: specialized syscall

Linux has an amazing splice(2) syscall. It can tell the kernel to move data between a TCP buffer on a socket and a buffer on a pipe. The data remains in the buffers, on the kernel side. This solves the problem of needlessly having to copy the data between userspace and kernel-space. With the SPLICE_F_MOVE flag the kernel may be able to avoid copying the data at all!

Our program using splice() looks like:

pipe_rd, pipe_wr = pipe()
fcntl(pipe_rd, F_SETPIPE_SZ, 4096);
while n:
    n = splice(sd, pipe_wr, 4096)
    splice(pipe_rd, sd, n)

We still need to wake up the userspace program and make two syscalls to forward any piece of data, but at least we avoid all the copying. Full source.

io_submit: Using Linux AIO API

In a previous blog post about io_submit() we proposed using the AIO interface with network sockets. Read the blog post for details, but here is the prepared program that has the echo server loop implemented with only a single syscall.


Image by jrsnchzhrs By-Nd 2.0

SOCKMAP: The ultimate weapon

In recent years the Linux kernel introduced an eBPF virtual machine. With it, user-space programs can run specialized, non-Turing-complete bytecode in the kernel context. Nowadays it's possible to select eBPF programs for dozens of use cases, ranging from packet filtering to policy enforcement.

Since kernel 4.14, Linux has had new eBPF machinery that can be used for socket splicing - SOCKMAP. It was created by John Fastabend at, exposing the Strparser interface to eBPF programs. Cilium uses SOCKMAP for Layer 7 policy enforcement, with all of its logic embedded in an eBPF program. The API is not well documented, requires root and, from our experience, is slightly buggy. But it's very promising. Read more:

This is how to use SOCKMAP. SOCKMAP, or specifically "BPF_MAP_TYPE_SOCKMAP", is a type of eBPF map. This map is an "array" - indices are integers. All this is pretty standard. The magic is in the map values - they must be TCP socket descriptors.

This map is very special - it has two eBPF programs attached to it. You read it right: the eBPF programs live attached to a map, not attached to a socket, cgroup or network interface as usual. This is how you would set up SOCKMAP in user program:

sock_map = bpf_create_map(BPF_MAP_TYPE_SOCKMAP, sizeof(int), sizeof(int), 2, 0)
prog_parser = bpf_load_program(BPF_PROG_TYPE_SK_SKB, ...)
prog_verdict = bpf_load_program(BPF_PROG_TYPE_SK_SKB, ...)
bpf_prog_attach(bpf_parser, sock_map, BPF_SK_SKB_STREAM_PARSER)
bpf_prog_attach(bpf_verdict, sock_map, BPF_SK_SKB_STREAM_VERDICT)

Ta-da! At this point we have an established sock_map eBPF map, with two eBPF programs attached: parser and verdict. The next step is to add a TCP socket descriptor to this map. Nothing simpler:

int idx = 0;
int val = sd;
bpf_map_update_elem(sock_map, &idx, &val, BPF_ANY);

At this point the magic happens. From now on, each time our socket sd receives a packet, prog_parser and prog_verdict are called. Their semantics are described in the strparser.txt and the introductory SOCKMAP commit. For simplicity, our trivial echo server only needs the minimal stubs. This is the eBPF code:

SEC("prog_parser")
int _prog_parser(struct __sk_buff *skb)
{
    return skb->len;
}

SEC("prog_verdict")
int _prog_verdict(struct __sk_buff *skb)
{
    uint32_t idx = 0;
    return bpf_sk_redirect_map(skb, &sock_map, idx, 0);
}

Side note: for the purposes of this test program, I wrote a minimal eBPF loader. It has no dependencies (no bcc, libelf, or libbpf) and can do basic relocations (like resolving the sock_map symbol mentioned above). See the code.

The call to bpf_sk_redirect_map is doing all the work. It tells the kernel: for the received packet, please oh please redirect it from a receive queue of some socket, to a transmit queue of the socket living in sock_map under index 0. In our case, these are the same sockets! Here we achieved exactly what the echo server is supposed to do, but purely in eBPF.

This technology has multiple benefits. First, the data is never copied to userspace. Secondly, we never need to wake up the userspace program. All the action is done in the kernel. Quite cool, isn't it?

We need one more piece of code, to hang the userspace program until the socket is closed. This is best done with good old poll(2):

/* Wait for the socket to close. Let SOCKMAP do the magic. */
struct pollfd fds[1] = {
    {.fd = sd, .events = POLLRDHUP},
};
poll(fds, 1, -1);

Full code.

The benchmarks

At this stage we have presented four simple TCP echo servers:

  • naive read-write loop
  • splice
  • io_submit
  • SOCKMAP

To recap, we are measuring the cost of three things:

  1. Syscall cost
  2. Wakeup latency, mostly visible as tail latency
  3. The cost of copying data

Theoretically, SOCKMAP should beat all the others:

                 syscall cost   waking up userspace   copying cost
read write loop  2 syscalls     yes                   2 copies
splice           2 syscalls     yes                   0 copy (?)
io_submit        1 syscall      yes                   2 copies
SOCKMAP          none           no                    0 copies

Show me the numbers

This is the part of the post where I'm showing you the breathtaking numbers, clearly showing the different approaches. Sadly, benchmarking is hard, and well... SOCKMAP turned out to be the slowest. It's important to publish negative results so here they are.

Our test rig was as follows:

  • Two bare-metal Xeon servers connected with a 25Gbps network.
  • Both have turbo-boost disabled, and the testing programs are CPU-pinned.
  • For better locality we localized RX and TX queues to one IRQ/CPU each.
  • The testing server runs a script that sends 10k batches of fixed-sized blocks of data. The script measures how long it takes for the echo server to return the traffic.
  • We do 10 separate runs for each measured echo-server program.
  • TCP: "cubic" and NONAGLE=1.
  • Both servers run the 4.14 kernel.

Our analysis of the experimental data identified some outliers. We think some of the worst times, manifested as long echo replies, were caused by unrelated factors such as network packet loss. In the charts presented we, perhaps controversially, skip the worst 1% of outliers in order to focus on what we think is the important data.

Furthermore, we spotted a bug in SOCKMAP. Some of the runs were delayed by up to whopping 64ms. Here is one of the tests:

Values min:236.00 avg:669.28 med=390.00 max:78039.00 dev:3267.75 count:2000000
Values:
 value |-------------------------------------------------- count
     1 |                                                         0
     2 |                                                         0
     4 |                                                         0
     8 |                                                         0
    16 |                                                         0
    32 |                                                         0
    64 |                                                         0
   128 |                                                         0
   256 |                                                      3531
   512 |**************************************************  1756052
  1024 |*****                                                208226
  2048 |                                                     18589
  4096 |                                                      2006
  8192 |                                                         9
 16384 |                                                         1
 32768 |                                                         0
 65536 |                                                     11585
131072 |                                                         1

The great majority of the echo runs (of 128KiB blocks in this case) finished in the 512us band, while a small fraction stalled for 65ms. This is pretty bad and makes comparison of SOCKMAP to the other implementations pretty meaningless. This is a second reason why we skip the worst 1% of results from all the runs - it makes the SOCKMAP numbers way more usable. Sorry.

2MiB blocks - throughput

The fastest of our programs was doing ~15Gbps over one flow, which seems to be a hardware limit. This is very visible in the first iteration, which shows the throughput of our echo programs.

This test shows: Time to transmit and receive 2MiB blocks of data, via our tested echo server. We repeat this 10k times, and run the test 10 times. After stripping the worst 1% numbers we get the following latency distribution:


This chart shows that both the naive read+write and io_submit programs were able to achieve a 1500us mean round trip time for a TCP echo of 2MiB blocks.

Here we clearly see that splice and SOCKMAP are slower than others. They were CPU-bound and unable to reach the line rate. We have raised the unusual splice performance problems in the past, but perhaps we should debug it one more time.

For each server we run the tests twice: without and with SO_BUSYPOLL setting. This setting should remove the "wakeup latency" and greatly reduce the jitter. The results show that naive and io_submit tests are almost identical. This is perfect! BUSYPOLL does indeed reduce the deviation and latency, at a cost of more CPU usage. Notice that neither splice nor SOCKMAP are affected by this setting.

16KiB blocks - wakeup time

Our second run of tests was with much smaller data sizes, sending tiny 16KiB blocks at a time. This test should illustrate the "wakeup time" of the tested programs.


In this test the non-BUSYPOLL runs of all the programs look quite similar (min and max values), with SOCKMAP being the exception. This is great - we can speculate the wakeup time is comparable. Surprisingly, splice has a slightly better median time than the others. Perhaps this can be explained by CPU artifacts, like better CPU cache locality due to less data copying. SOCKMAP is, again, the slowest, with the worst max and median times. Boo.

Remember we truncated the worst 1% of the data - we artificially shortened the "max" values.


In this blog post we discussed the theoretical benefits of SOCKMAP. Sadly, we found it's not ready for prime time yet. We compared it against splice, which didn't benefit from BUSYPOLL and had disappointing performance. The naive read/write loop and io_submit approaches have exactly the same performance characteristics, and both benefit from BUSYPOLL, which reduces jitter (wakeup time).

If you are piping data between TCP sockets, you should definitely take a look at SOCKMAP. While our benchmarks show it's not ready for prime time yet, with poor performance, high jitter and a couple of bugs, it's very promising. We are very excited about it. It's the first technology on Linux that truly allows the user-space process to offload TCP splicing to the kernel. It also has potential for much better performance than other approaches, ticking all the boxes of being async, kernel-only and totally avoiding needless copying of data.

This is not everything. SOCKMAP is able to pipe data across multiple sockets - you can imagine a full mesh of connections sending data to each other. Furthermore, it exposes the strparser API, which can be used to offload basic application framing. Combined with kTLS, it can even do transparent encryption. There are also rumors of adding UDP support. The possibilities are endless.

Recently the kernel has been exploding with eBPF innovations. It seems like we've only just scratched the surface of the possibilities exposed by the modern eBPF interfaces.

Many thanks to Jakub Sitnicki for suggesting SOCKMAP in the first place, writing the proof of concept and now actually fixing the bugs we found. Go strong Warsaw office!

Categories: Technology

Introducing Cf-Terraform

Fri, 15/02/2019 - 20:02

Ever since we implemented support for configuring Cloudflare via Terraform, we’ve been steadily expanding the set of features and services you can manage via this popular open-source tool.

If you're unfamiliar with how Terraform works with Cloudflare, check out our developer docs.

We are Terraform users ourselves, and we believe in the stability and reproducibility that can be achieved by defining your infrastructure as code.

What is Terraform?

Terraform is an open-source tool that allows you to describe your infrastructure and cloud services (think virtual machines, servers, databases, network configurations, Cloudflare API resources, and more) as human-readable configurations.

Once you’ve done this, you can run the Terraform command-line tool and it will figure out the difference between your desired state and your current state, and make the API calls in the background necessary to reconcile the two.

Unlike other solutions, Terraform does not require you to run software on your hosts, and instead of spending time manually configuring machines, creating DNS records, and specifying Page Rules, you can simply run:

terraform apply

and the state described in your configuration files will be built for you.

Enter Cloudflare Terraforming

Terraform is a tremendous time-saver once you have your configuration files in place, but what do you do if you’re already a Cloudflare user and you need to convert your particular setup, records, resources and rules into Terraform config files in the first place?

Today, we’re excited to share a new open-source utility to make the migration of even complex Cloudflare configurations into Terraform simple and fast.

It’s called cf-terraforming and it downloads your Cloudflare setup, meaning everything you’ve defined via the Cloudflare dashboard and API, into Terraform-compliant configuration files in a few commands.

Getting up and running quickly

Cf-terraforming is open-source and available on GitHub now. You need a working Golang installation and a Cloudflare account with some resources defined. That’s it!

Let’s first install cf-terraforming, while also pulling down all dependencies and updating them as necessary:

$ go get -u

Cf-terraforming is a command line tool that you invoke with your Cloudflare credentials, some zone information and the resource type that you want to export. The output is a valid Terraform configuration file describing your resources.

To use cf-terraforming, first get your API key and Account ID from the Cloudflare dashboard. You can find your account ID at the bottom right of the overview page for any zone in your account, along with a quick link to get your API key. You can store your key and account ID in environment variables to make it easier to work with the tool:

export CLOUDFLARE_TOKEN="<your-key>"
export CLOUDFLARE_EMAIL="<your-email>"
export CLOUDFLARE_ACCT_ID="<your-id>"

Cf-terraforming can create configuration files for any of the resources currently available in the official Cloudflare Terraform provider, but sometimes it’s also handy to export individual resources as needed.

Let’s say you’re migrating your Cloudflare configuration to Terraform and you want to describe your Spectrum applications. You simply call cf-terraforming with your credentials, zone, and the spectrum_application command, like so:

go run cmd/cf-terraforming/main.go --email $CLOUDFLARE_EMAIL --key $CLOUDFLARE_TOKEN --account $CLOUDFLARE_ACCT_ID spectrum_application

Cf-terraforming will contact the Cloudflare API on your behalf and define your resources in a format that Terraform understands:

resource "cloudflare_spectrum_application" "1150bed3f45247b99f7db9696fffa17cbx9" {
  protocol = "tcp/8000"
  dns = {
    type = "CNAME"
    name = ""
  }
  ip_firewall = "true"
  tls = "off"
  origin_direct = [
    "tcp://",
  ]
}

You can redirect the output to a file and then start working with Terraform. First, ensure you are in the cf-terraforming directory, then run:

go run cmd/cf-terraforming/main.go --email $CLOUDFLARE_EMAIL --key $CLOUDFLARE_TOKEN --account $CLOUDFLARE_ACCT_ID spectrum_application >

The same goes for Zones, DNS records, Workers scripts and routes, security policies and more.

Which resources are supported?

Currently cf-terraforming supports every resource type that you can manage via the official Cloudflare Terraform provider:

Get involved

We’re looking for feedback and any issues you might encounter while getting up and running with cf-terraforming. Please open any issues against the GitHub repo.

Cf-terraforming is open-source, so if you want to get involved feel free to pick up an open issue or make a pull request.

Looking forward

We’ll continue to expand the set of Cloudflare resources that you can manage via Terraform, and that you can export via cf-terraforming. Be sure to keep an eye on the cf-terraforming repo for updates.

Categories: Technology

SEO Best Practices with Cloudflare Workers, Part 2: Implementing Subdomains

Fri, 15/02/2019 - 17:09
Recap

In Part 1, the merits and tradeoffs of subdirectories and subdomains were discussed.  The subdirectory strategy is typically superior to subdomains because subdomains suffer from keyword and backlink dilution.  The subdirectory strategy more effectively boosts a site's search rankings by ensuring that every keyword is attributed to the root domain instead of being diluted across subdomains.

Subdirectory Strategy without the NGINX

In the first part, our friend Bob set up a hosted Ghost blog at that he connected to using a CNAME DNS record.  But what if he wanted his blog to live at to gain the SEO advantages of subdirectories?

A reverse proxy like NGINX is normally needed to route traffic from subdirectories to remotely hosted services.  We'll demonstrate how to implement the subdirectory strategy with Cloudflare Workers and eliminate our dependency on NGINX. (Cloudflare Workers are serverless functions that run on the Cloudflare global network.)

Back to Bobtopia

Let's write a Worker that proxies traffic from a subdirectory – – to a remotely hosted platform –  This means that if I go to, I should see the content of, but my browser should still think it's on

Configuration Options

In the Workers editor, we'll start a new script with some basic configuration options.

// keep track of all our blog endpoints here
const myBlog = {
  hostname: "",
  targetSubdirectory: "/articles",
  assetsPathnames: ["/public/", "/assets/"]
}

The script will proxy traffic from myBlog.targetSubdirectory to Bob's hosted Ghost endpoint, myBlog.hostname.  We'll talk about myBlog.assetsPathnames a little later.

Requests are proxied from to (Uh oh... is because the hosted Ghost blog doesn't actually exist)

Request Handlers

Next, we'll add a request handler:

async function handleRequest(request) {
  return fetch(request)
}

addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request))
})

So far we're just passing requests through handleRequest unmodified.  Let's make it do something:

async function handleRequest(request) {
  ...
  // if the request is for blog html, get it
  if (requestMatches(myBlog.targetSubdirectory)) {
    console.log("this is a request for a blog document", parsedUrl.pathname)
    const targetPath = formatPath(parsedUrl)
    return fetch(`https://${myBlog.hostname}/${targetPath}`)
  }
  ...
  console.log("this is a request to my root domain", parsedUrl.pathname)
  // if its not a request for blog related stuff, do nothing
  return fetch(request)
}

addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request))
})

In the above code, we added a conditional statement to handle traffic to myBlog.targetSubdirectory.  Note that we've omitted our helper functions here.  The relevant code lives inside the if block near the top of the function. The requestMatches helper checks if the incoming request contains targetSubdirectory.  If it does, a request is made to myBlog.hostname to fetch the HTML document which is returned to the browser.

When the browser parses the HTML, it makes additional asset requests required by the document (think images, stylesheets, and scripts).  We'll need another conditional statement to handle these kinds of requests.

// if its blog assets, get them
if (myBlog.assetsPathnames.some(requestMatches)) {
  console.log("this is a request for blog assets", parsedUrl.pathname)
  const assetUrl = request.url.replace(parsedUrl.hostname, myBlog.hostname);
  return fetch(assetUrl)
}

This similarly shaped block checks if the request matches any pathnames enumerated in myBlog.assetsPathnames and fetches the assets required to fully render the page.  Assets happen to live in /public and /assets on a Ghost blog.  You'll be able to identify your assets directories when you fetch the HTML and see logs for scripts, images, and stylesheets.

Logs show the various scripts and stylesheets required by Ghost live in /assets and /public

The full script with helper functions included is:

// keep track of all our blog endpoints here
const myBlog = {
  hostname: "",
  targetSubdirectory: "/articles",
  assetsPathnames: ["/public/", "/assets/"]
}

async function handleRequest(request) {
  // returns an empty string or a path if one exists
  const formatPath = (url) => {
    const pruned = url.pathname.split("/").filter(part => part)
    return pruned && pruned.length > 1 ? `${pruned.join("/")}` : ""
  }

  const parsedUrl = new URL(request.url)
  const requestMatches = match => new RegExp(match).test(parsedUrl.pathname)

  // if its blog html, get it
  if (requestMatches(myBlog.targetSubdirectory)) {
    console.log("this is a request for a blog document", parsedUrl.pathname)
    const targetPath = formatPath(parsedUrl)
    return fetch(`https://${myBlog.hostname}/${targetPath}`)
  }

  // if its blog assets, get them
  if (myBlog.assetsPathnames.some(requestMatches)) {
    console.log("this is a request for blog assets", parsedUrl.pathname)
    const assetUrl = request.url.replace(parsedUrl.hostname, myBlog.hostname);
    return fetch(assetUrl)
  }

  console.log("this is a request to my root domain", parsedUrl.pathname);
  // if its not a request for blog related stuff, do nothing
  return fetch(request)
}

addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request))
})

Caveat

There is one important caveat about the current implementation that bears mentioning. This script will not work if your hosted service assets are stored in a folder that shares a name with a route on your root domain.  For example, if you're serving assets from the root directory of your hosted service, any request made to the home page will be masked by these asset requests, and the home page won't load.

The solution here involves modifying the blog assets block to handle asset requests without using paths.  I'll leave it to the reader to solve this, but a more general solution might involve changing myBlog.assetsPathnames to myBlog.assetFileExtensions, which is a list of all asset file extensions (like .png and .css).  Then, the assets block would handle requests that contain assetFileExtensions instead of assetsPathnames.


Bob is now enjoying the same SEO advantages as Alice after converting his subdomains to subdirectories using Cloudflare Workers.  Bobs of the world, rejoice!

Categories: Technology

SEO Best Practices with Cloudflare Workers, Part 1: Subdomain vs. Subdirectory

Fri, 15/02/2019 - 17:09
Subdomain vs. Subdirectory: 2 Different SEO Strategies

Alice and Bob are budding blogger buddies who met up at a meetup and purchased some root domains to start writing.  Alice bought and Bob scooped up

Alice and Bob decided against WordPress because it's what their parents use, and purchased subscriptions to a popular cloud-based Ghost blogging platform instead.

Bob decides his blog should live at – a subdomain of Alice keeps it old school and builds hers at – a subdirectory of


Subdomains and subdirectories are different strategies for instrumenting root domains with new features (think a blog or a storefront).  Alice and Bob chose their strategies on a whim, but which strategy is technically better?  The short answer is, it depends. But the long answer can actually improve your SEO.  In this article, we'll review the merits and tradeoffs of each. In Part 2, we'll show you how to convert subdomains to subdirectories using Cloudflare Workers.

Setting Up Subdomains and Subdirectories

Setting up subdirectories is trivial on basic websites.  A web server treats its subdirectories (aka subfolders) the same as regular old folders in a file system.  In other words, basic sites are already organized using subdirectories out of the box.  No setup or configuration is required.


In the old school site above, we'll assume the blog folder contains an index.html file. The web server renders blog/index.html when a user navigates to the subdirectory.  But Alice and Bob's sites don't have a blog folder because their blogs are hosted remotely – so this approach won't work.

On the modern Internet, subdirectory setup is more complicated because the services that comprise a root domain are often hosted on machines scattered across the world.

Because DNS records only operate on the domain level, records like CNAME have no effect on a URL like – and because her blog is hosted remotely, Alice needs to install NGINX or another reverse proxy and write some configuration code that proxies traffic from to her hosted blog. It takes time, patience, and experience to connect her domain to her hosted blog.

A location block in NGINX is necessary to proxy traffic from a subdirectory to a remote host
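The screenshot itself is missing from this reproduction, but the kind of location block it showed can be sketched roughly like this (the upstream hostname is a placeholder, not Alice's actual provider):

```nginx
location /blog/ {
    # send /blog/* to the hosted Ghost service, preserving the rest of the path
    proxy_pass https://ghost-provider.example.com/;
    proxy_set_header Host ghost-provider.example.com;
    proxy_set_header X-Forwarded-For $remote_addr;
}
```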

Bob's subdomain strategy is the easier approach with his remotely hosted blog.  A DNS CNAME record is often all that's required to connect Bob's blog to his subdomain.  No additional configuration is needed if he can remember to pay his monthly subscription.

Configuring a DNS record to point a hosted service at your blog subdomain

To recap, subdirectories are already built into simple sites that serve structured content from the same machine, but modern sites often rely on various remote services.  Subdomain setup is comparatively easy for sites that take advantage of various hosted cloud-based platforms.

Are Subdomains or Subdirectories Better for SEO?

Subdomains are neat. If you ask me, a subdomain URL is more appealing than a subdirectory URL. But if we want to make an informed decision about the best strategy, where do we look?  If we're interested in SEO, we ought to consult Googlebot.

Subdomains and subdirectories are equal in the eyes of Googlebot, according to Google itself.  This means that Alice and Bob have the same chance at ranking in search results.  This is because Alice's root domain and Bob's subdomain build their own sets of keywords.  Relevant keywords help your audience find your site in a search. There is one important caveat to point out for Bob:

A subdomain is treated as equal to, and distinct from, its root domain.  This means that a subdomain's keywords are kept separate from the root domain's.

What does this mean for Bob?  Let's imagine Bob's root domain is already a popular online platform for folks named Bob to seek kinship with other Bobs.  In this peculiar world, searches that rank for the root domain wouldn't automatically rank for the blog subdomain, because each domain builds its own separate keywords.  The lesson here is that keywords are diluted across subdomains.  Each additional subdomain decreases the likelihood that any particular domain ranks in a given search.  A high-ranking subdomain does not imply that your root domain ranks well.

In a search for "Cool Blog", the root domain suffers from keyword dilution: it doesn't rank, because the blog keyword is owned by the subdomain.

Subdomains also suffer from backlink dilution.  A backlink is simply a hyperlink that points back to your site. Alice's attribution to a post on the etymology of Bob on Bob's blog subdomain does not help Bob's root domain, because the subdomain is treated as separate from the root domain.  If Bob used subdirectories instead, his blog posts would feed the authority of his root domain, and Bobs everywhere would rejoice.

The authority of the blog subdomain is increased when Alice links to Bob's interesting blog post, but the authority of Bob's root domain is not affected.

Although search engines have improved at identifying subdomains and attributing keywords back to the root domain, they still have a long way to go.  A prudent marketer would avoid risk by assuming search engines will always be bad at cataloguing subdomains.

So when would you want to use subdomains?  A good use case is for companies interested in expanding into foreign markets.  Pretend the site belongs to an American company whose website is in English.  Their English keywords won't rank well in German searches – so they translate their site into German and begin building new keywords on a German subdomain. Erfolg!

Other use cases for subdomains include product stratification (think global brands with presence across many markets) and corporate internal tools (think productivity and organization tools that aren't user facing).  But unless you're a huge corporation or just finished your Series C round of funding, subdomaining your site into many silos is not helping your SEO.


If you're a startup or small business looking to optimize your SEO, consider subdirectories over subdomains.  Boosting the authority of your root domain should be a universal goal of any organization. The subdirectory strategy concentrates your keywords onto a single domain, while the subdomain strategy spreads your keywords across multiple distinct domains. In short, the subdirectory strategy results in better root domain authority. Higher domain authority leads to better search rankings, which translates into more engagement.

Consider the multitude of disruptive PaaS startups with separate docs and blog subdomains.  Why not switch those to subdirectories and boost the authority of your root domain with all those docs searches and Stack Overflow backlinks?

Want to Switch Your Subdomains to Subdirectories?

Interested in switching your subdomains to subdirectories without a reverse proxy? In Part 2, we'll show you how using Cloudflare Workers.

Categories: Technology

Solving Problems with Serverless – The Cloudflare LED Data Center Board, Part I

Thu, 14/02/2019 - 20:00

You know you have a cool job when your first project lets you bring your hobby into the office.

That’s what happened to me just a few short weeks ago when I joined Cloudflare. The task: to create a light-up version of our Data Center map – we’re talking more than a hundred LEDs tied to the deployment state of each and every Cloudflare data center. This map will be a part of our booths, so it has to be able to travel, meaning we have to consider physical shipping and the ability to update the data when the map is away from the office. And the fun part – we are debuting it at SF Developer Week in late February (I even get to give a talk about it!). That gave me one week of software time in our San Francisco office, and a little over two and a half weeks in the Austin office with the physical materials.

What the final LEDs will look like on a map of the world.

So what does this have to do with Serverless? Well, let’s think about where and how this map will need to operate: it will be going to expo halls and conferences, and we want it to show our most current data center statuses for at least that event, if not updating once a day. But we don’t need to stay connected to the information store constantly, nor should we expect to over conference or expo WiFi.

Data Stored about Data Centers

The data stored about each data center falls into two distinct categories: data about the data center itself, and data about how that data center should be rendered on the map. These are relatively simple data structures. For a data center, we store the city it is in, its latitude and longitude, and its status. We arbitrarily assign an integer ID to each data center, which we'll use to match this data with the data in the other store. We’re not going to pick and choose which data centers we want; we just pull them all down and let the microcontroller figure out how to display them.

Speaking of, this is where the data store relevant to the display comes in. The LEDs sit on strands numbered 0-7, and each data center is represented by a single LED numbered 0-63 within its strand. For each map entry, we store the ID of the data center from the first store, the strand number, and the LED's position on the strand.

Both of these sets of data can be stored in a key-value store, with the ID number as the key and a JSON object representing either the data center or its representative LED on the map as the value. Because of this, coupled with the fact that we do not need to search or index this data, we decided to use Workers KV data stores to keep this information.
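A sketch of the two record shapes, keyed by the shared ID (the field names here are our assumptions for illustration, not the project's actual schema):

```javascript
// Value stored in the "data centers" KV namespace under key "42"
const dataCenter = {
  city: "San Francisco",
  lat: 37.77,
  lon: -122.42,
  status: "active",
};

// Value stored in the "map LEDs" KV namespace under the same key "42"
const led = {
  strand: 3, // strands are numbered 0-7
  led: 17,   // position within the strand, 0-63
};

// The shared key ties a data center to its LED on the board.
const key = "42";
```

Because both namespaces use the same key, the microcontroller can join the two records without any server-side indexing.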

The Data Center and Data Center Map API

We needed two APIs around the data centers, and we needed them fast: both in the immediate sense of having only a few weeks to complete the project, and in the sense that the data needed to download quickly over non-ideal internet connections. We also knew this map would be traveling all over the world, so the API had to work with at least decent latency no matter where the map was geographically.

This is where those hundreds of LEDs come in handy: each one represents a data center to which we could deploy serverless Workers. We can deploy the API before we leave the comfort of the office, and it'll be ready for us when we hit the conference floor. Workers also, unsurprisingly, work really well with Workers KV data stores, allowing us to rapidly develop APIs around our data.
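The read path can be sketched as follows. The key scheme is an assumption, and `list()`/`get()` mirror the shape of the Workers KV API; in a real Worker, `kv` would be a KV namespace binding:

```javascript
// Pull every JSON record out of a KV-style store and return them as an
// array, ready to serialize for the microcontroller.
async function listRecords(kv) {
  const { keys } = await kv.list();
  const values = await Promise.all(keys.map(k => kv.get(k.name)));
  return values.map(v => JSON.parse(v));
}

// In a Worker, this would back a fetch handler, roughly:
//   addEventListener("fetch", e =>
//     e.respondWith(listRecords(DATACENTERS).then(rows =>
//       new Response(JSON.stringify(rows)))));
```

Serving this from the edge is what keeps latency low wherever the map travels.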

Our software architecture diagram

We ended up with the architecture diagram above: two Workers KV data stores and two serverless Workers, all of which can be deployed across the world to make sure the physical map has updated data every time we head to a new show.

In the next post in this series, we'll take a look at the physical architecture of the sign.

We'll also take a look at the system, built on the architecture laid out in this post, that consumes this data and turns it into an LED map – so keep an eye out for it next month!

Categories: Technology

Winning the Blackbird Battle

Thu, 14/02/2019 - 18:03
US Court of Appeals for the Federal Circuit

Frequent readers of the Cloudflare blog are aware of the efforts we’ve undertaken in response to our first encounter with a patent troll. We’re happy to report that on Wednesday, the U.S. Court of Appeals for the Federal Circuit issued an opinion affirming a lower court decision dismissing the case brought by Blackbird Tech. This is the last step in the process[1]: we’ve won.


In addition to vigorously opposing this case in court, we created and sponsored Project Jengo to push back against the incentives that empower patent trolls like Blackbird Tech. Now that the case is over, we will be wrapping up Project Jengo and will report back with a summary of the Project’s successes in the near future.

But before we move on from the litigation, I want to share a few reflections on this case.

We noted from the very beginning: “The infringement claim is not a close one … if the ‘335 patent is read broadly enough to cover our system (which shouldn’t happen), it would also cover any system where electronic communications are examined and redacted or modified.”


Our initial observation, which we thought was obvious, was borne out. And we were able to prevail on our arguments as swiftly and cleanly as is possible in the federal court system. The U.S. District Court resolved the case on a motion to dismiss, meaning the court didn’t even consider the factual circumstances of the case, but looked at the face of the complaint and the language of the patent itself and determined that the claims in the patent were too vague to allow anyone to enforce them. It was so obvious to the court that Judge Chhabria’s decision was little more than a single page. You can read our discussion of the lower court decision in a previous blog post.

Yet Blackbird appealed that dismissal to the U.S. Court of Appeals for the Federal Circuit, a specialized court based in Washington, DC that hears appeals of all patent cases. A panel of three judges from that court heard arguments on the appeal last Friday, but didn’t ask our attorney a single question about the substance of our argument on the abstractness of the patent. He sat down with almost half of his 15 minutes of argument time left because there was nothing more to say. Yesterday, just three business days after that hearing, the court affirmed the lower court’s decision in summary fashion, which means they didn’t even write about the claims or arguments; they just said “Affirmed” (see below).

If it were a boxing match, it would have been called by the referee in the first minute of the first round after three knockdowns. It was easy and painless, right?  

Not at all.  

Blackbird filed this case on March 16, 2017. For nearly two years, anyone doing due diligence on Cloudflare might have had questions about whether there was a cloud over our rights to our technology. And we had to go through a lot of briefing, and the related legal expenses, to get to this point. Blackbird’s combined legal filings at the district court and appellate court amounted to more than 650 pages; our responsive briefs ran to more than 900.

The two courts spent less than two pages describing a result that was obvious to them, but it took us two years of uncertainty and cost to get there. Federal court litigation doesn’t make anything easy. Even if Blackbird had won the case, it is not clear they would have been able to collect significant damages. Our allegedly infringing use was not a product or feature that we charged for or made money from – it was essentially posting interstitial messages for various errors. Even though we were able to win this case early in the legal process and keep our costs as low as possible, it’s possible we spent more money resolving this matter than Blackbird would have been able to collect from us after trial.

This is the dysfunction that makes patent trolling possible. It is why the option for a quick settlement, in the short term, is always so appealing to parties sued by patent trolls. It’s why we exerted efforts on Project Jengo to try and change the long-term calculus and help out others in the community who may soon find themselves in a similar predicament.  

A final note…

Anthony Garza and Steven Callahan of Charhon Callahan Robson & Garza, a litigation boutique in Dallas, are great lawyers. They provided exceptional counseling, perfect legal work, and strong arguments at every step of this process. In every hearing, I was extremely happy that Anthony was the lawyer on our side -- he demonstrated complete control of both the relevant legal authorities and the intricate details of the patent and its various claims. He had the advantage of being right, but he never left any doubt who was making the better argument.  My thanks for their hard work and guidance.



[1] Yes, I am aware that Blackbird could seek discretionary review of this decision at the U.S. Supreme Court. But the Supreme Court accepts less than 5% of the cases presented to it, so there isn’t even a request to that court in most cases unless the case is particularly significant or reflects a disagreement among federal courts. I don’t reasonably expect they would take this case.

Categories: Technology

The Curious Case of the Garden State Imposter

Wed, 13/02/2019 - 22:44

Dealing with abuse complaints isn’t easy, for any Internet company. The variety of subject matters at issue, the various legal and regulatory requirements, and the uncertain intentions of complaining parties combine to create a ridiculously complex situation.  We often suggest to those who propose easy answers to this challenge that they spend a few hours watching the terminal of a member of our Trust & Safety team to get a feel for how difficult it can be. Yet even we were a bit surprised by an unusual abuse report we’ve been dealing with recently.

Last week, we received what looked like a notable law enforcement request: a complaint from an entity that identified itself as the “New Jersey Office of the Attorney General” and claimed to be a notice that Cloudflare was “serving files consisting of 3D printable firearms in violation of NJ Stat. Ann. § 2C:39-9 3(I)(2).”  The complaint further asked us to “delete all files described within 24 hours” and threatened “to press charges in order to preserve the safety of the citizens of New Jersey.”

Because we are generally not the host of information, and are unable to remove content from the Internet that we don’t host, our abuse process is specifically set up to forward complaints about content to the website host. Cloudflare also provides the contact information for the hosting provider to the person filing the complaint so that they can address their report with the host of the content in question. That is what we did in this case.

We took no action with respect to the underlying allegation. As a preliminary matter, we confirmed we were not hosting the allegedly infringing content, and any action we might have taken would not have impacted the availability of the content online. Perhaps even more importantly, in order for an Internet infrastructure provider like Cloudflare to take action on content, we believe due process requires more than a threat of legal action.

Complaint Oddities

A few days after we forwarded the complaint, we saw news reports indicating that the website operator and a number of other entities had sued the State of New Jersey over the complaint we had forwarded. That lawsuit prompted us to take a closer look at the complaint. We immediately noticed a few anomalies with the complaint.

First, when law enforcement agencies contact us, they typically reach out directly, through a dedicated email line. Indeed, we specifically encourage law enforcement to contact us directly on our abuse page, because it facilitates a personalized review and response. The NJ-related request did not come in through this channel, but was instead submitted through our general abuse form. This was one data point that raised our skepticism as to the legitimacy of this report.

Second, the IP address linked to the complaint was geo-located to the Slovak Republic, which seemed like an unlikely location for the New Jersey Attorney General to be submitting an abuse report from. This particular data point was a strong indicator that this might be a fraudulent report.

Third, while the contact information provided in the complaint appeared to be a legitimate, publicly available email address operated by the State of NJ, it was one intended for public reporting of tips of criminal misconduct, as advertised here. It seems unlikely that a state attorney general would use such an email to threaten criminal prosecution. On occasion, we see this technique used when an individual would like to have Cloudflare’s response to an abuse report sent to some type of presumably interested party. The person filing this misattributed abuse report likely hopes that the party who controls that email address will then initiate some type of investigation or action based on that abuse report.

All of these factors — which were all part of the complaint passed on to the website owner and operator — made us skeptical that the complaint was legitimate. Nonetheless, we observed that the New Jersey Attorney General’s office was aware of and participating in the litigation, which tempered our skepticism and made us believe that individuals from New Jersey were likely to contact us.

On Friday, we were contacted by the New Jersey Attorney General’s office, and in response to a request, including legal process, we provided additional information about the complaint. Yesterday, the New Jersey Attorney General’s office solved the mystery for us in a submission to the court confirming the complaint was a fake.

We have investigated other abuse reports submitted from this IP address, and we have identified a clear pattern of fake abuse reports. To be clear, this IP address has never impersonated law enforcement individuals prior to this NJ-related report. We have taken steps to block this IP address from submitting any further fake abuse reports.

Why does a fake complaint matter?

Abusing the abuse process by filing fake abuse reports can be a highly effective way to silence speech on the Internet. It is effectively a form of a denial of service attack. A fake abuse report can potentially result in a hosting provider taking their customer offline based on an unconfirmed allegation. In certain contexts such as copyright claims, the hosting provider is incentivized to act first and then ask questions later so as to reduce their potential liability as the host of the problematic content. The hosting provider’s sense of urgency to block the identified content leads to the sinister effectiveness of a fake abuse complaint. The content owner can submit a counter-notice to have access to the content restored, but that can be a daunting task if the potentially fake abuse report was sent by a well-funded organization or by law enforcement.

YouTube has recently been targeted by exactly this problem as recently reported by The Verge. Bad actors are abusing their “copyright strikes” system by sending ransom demands to seemingly innocent content creators. This type of attack can best be summarized as “pay me or I’ll file an abuse complaint and get you taken down”.

We don’t know who submitted the complaint or what their motivation might have been, but the incident does remind us of the importance of proceeding carefully when we receive complaints and requests from law enforcement.  Dealing with abuse complaints and requests from law enforcement is never easy. And although many complaints are legitimate, this complaint was a good reminder that at least some legal demands are just attempts to game our abuse process. We’ll continue to explore ways of minimizing the possibility that our abuse process can itself be abused by bad actors.  

Categories: Technology

Introducing The Serverlist: Cloudflare's New Serverless Newsletter

Tue, 12/02/2019 - 21:27

At Cloudflare, we've been digging our heels into serverless, so we created The Serverlist newsletter, enabling us to share interesting content in the serverless space with the developer community. The Serverlist newsletter highlights all things serverless, with content that covers news from the serverless world, tutorials to learn how to get involved yourself, and different events you can attend.

Check out our first edition of The Serverlist below and sign up here to have this newsletter delivered to your inbox.


Categories: Technology

IBM Cloud Internet Services protects any cloud – now with Cloudflare Spectrum and Workers

Mon, 11/02/2019 - 23:00

At Cloudflare, we have an ambitious mission of helping to build a better Internet. Partnerships are a core part of how we achieve this mission. Last year we joined forces with IBM. Their expertise and deep relationships with the world's largest organizations are highly complementary with Cloudflare's cloud-native, API-first architecture that provides superior security, performance, and availability for Internet-facing workloads.  Our shared goal of enabling and supporting a hybrid and multi-cloud world is becoming a greater component of our combined message to the market.

As we prepare for the IBM Think customer conference in San Francisco this week, the Cloudflare team is excited about the opportunities ahead. We closed 2018 with momentum, bringing several of the world’s leading brands onto the Cloud Internet Services (CIS) platform. Customers have used CIS for several purposes, including:

  • The CIS Global Load Balancer provides high availability across IBM Cloud regions for customers in Europe, North America, and Latin America
  • CIS caching capabilities have ensured availability and performance for world spectator events with high traffic spikes
  • The CIS authoritative DNS delivers greater availability and performance for Internet-facing workloads supporting thousands of developers

At Think, please visit Cloudflare at our booth (#602). In addition, you may want to visit these interesting discussions:

Simplifying Enterprise Workloads with Modern Cloud Infrastructure

Hear Satinder Sethi, GM of IBM Cloud Infrastructure, and Cloudflare CEO Matthew Prince discuss the trade-offs of current and emerging technologies for infrastructure design, particularly for organizations operating in a hybrid or multi-cloud environment.

Securing the Network Edge for Your Cloud Apps

An overview of the benefits of IBM Cloud Internet Services, seamlessly integrated with IBM Cloud platform security controls.

How The Weather Company Uses Edge Computing to Create the Future of Web Performance

Find out how, after years of iteration, the team at The Weather Company has found a unique way of using edge computing to assemble heterogeneous website resources into a single, highly reliable and performant site.

We’re also thrilled to support IBM Cloud’s announcement of the addition of new Cloudflare capabilities to the CIS suite of security and performance features. These capabilities make the CIS offering even more useful to IBM’s customers who desire to make their Internet-facing applications and APIs secure, resilient, and fast.

Here is a short list of use cases for the new features that IBM is releasing in the CIS Enterprise Plan:

1)    Enable robust analytics and forensics

We are introducing CIS Enterprise Log Share, which enables CIOs and security leaders to ingest log data into their data repository, business intelligence tool, and/or Security Information and Event Management (SIEM) system of choice. This will enable correlation analysis to develop insights about attacks.

2)    Protect and accelerate non-HTTP traffic

Many of our joint customers run applications that utilize non-HTTP traffic. To serve these customers, Cloud Internet Services Range mitigates Layer 3 and 4 volumetric DDoS attacks to keep TCP- and UDP-based services online and secure, across any port.  It also protects these ports from data snooping and theft by encrypting traffic with SSL/TLS.

3)    Run custom logic on the edge

Serverless computing is increasingly attractive to application developers for a variety of reasons:  lower latency, more granular control, greater security, and savings on storage and compute resources.

With CIS Edge Functions, now available in beta, customers can realize serverless computing benefits by running complex logic and code at the network edge. IBM Cloud Internet Services Edge Functions allows JavaScript to be deployed on the Cloudflare network, enabling customers to deploy their code to over 165 data centers around the world in less than 30 seconds. Use cases include: intercepting and modifying any request made to an application, making outbound requests to any URL on the Internet, and maintaining fine-grained control over how traffic interacts with Internet-facing workloads.
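As an illustrative sketch of the "intercept and modify" use case (this is not IBM's actual API surface; plain objects stand in for the real Request objects an Edge Function receives):

```javascript
// Add a header and rewrite a path before the request continues upstream.
// The header name and the /v1/ -> /v2/ rewrite are hypothetical examples.
function interceptRequest(req) {
  const headers = { ...req.headers, "x-edge-processed": "1" };
  // Example rewrite: serve legacy /v1/ paths from the newer /v2/ API
  const url = req.url.replace("/v1/", "/v2/");
  return { ...req, url, headers };
}
```

Because this logic runs at the edge, the rewrite happens before the request ever reaches the origin.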

Customers who are running mission-critical Enterprise applications that cannot afford downtime or suboptimal performance should also be aware of these existing CIS Enterprise features:

  • For applications with a global user base, leverage CIS Smart Routing to route requests effectively across the globe
  • For high-value applications that attract a variety of Layer 7 attacks and require that only the right visitors gain access, use CIS Custom WAF rules
  • For applications that need constant monitoring and failover, five-second health checks are available

All of these capabilities are available through the CIS Enterprise Plan. To get started with any of these capabilities, please review the plans here.

The IBM and Cloudflare partnership advances both organizations’ missions, and it helps IBM customers accelerate and protect anything connected to the Internet.  To find out more about the partnership, please see here.

Categories: Technology

Give your automated services credentials with Access service tokens

Thu, 07/02/2019 - 17:00

Cloudflare Access secures your internal sites by adding authentication. When a request is made to a site behind Access, Cloudflare asks the visitor to login with your identity provider. With service tokens, you can now extend that same level of access control by giving credentials to automated tools, scripts, and bots.

Authenticating users and bots alike

When users attempt to reach a site behind Access, Cloudflare looks for a JSON Web Token (JWT) to determine if that visitor is allowed to reach that URL. If the user does not have a JWT, we redirect them to the identity provider configured for your account. When they login successfully, we generate the JWT.

When you create an Access service token, Cloudflare generates a unique Client ID and Secret scoped to that service. When your bot sends a request with those credentials as headers, we validate them ourselves instead of redirecting to your identity provider. Access creates a JWT for that service and the bot can use that to reach your application.

Getting started

Within the Access tab of the Cloudflare dashboard, you’ll find a new section: Service Tokens. To get started, select “Generate a New Service Token.”

Give your automated services credentials with Access service tokens

You’ll be asked to name the service before Access provides you with a Client ID and Client Secret. The dashboard only displays the Client Secret once, so you’ll need to copy it and keep it in a secure location.

Give your automated services credentials with Access service tokens

Once the service token has been created, you’ll need to update your Access policies to allow requests from approved services. You can add service tokens to existing rules, or you can create new policies for specific endpoints. Access will list the service tokens you created so you can select which services are allowed.

Give the Client ID and Secret to your service, and include them as request headers on every request to the protected application.


When your service attempts to reach an application behind Access, Cloudflare will look for those headers. If found, we’ll confirm they’re valid and exchange them for a JSON Web Token (JWT), which allows the request to proceed.
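For illustration, a small helper that attaches the service-token credentials as request headers. The header names `CF-Access-Client-Id` and `CF-Access-Client-Secret` follow Cloudflare's Access documentation; the token values below are placeholders:

```javascript
// Build the headers an automated service sends to reach an app behind
// Access. Cloudflare validates these and exchanges them for a JWT.
function accessHeaders(clientId, clientSecret) {
  return {
    "CF-Access-Client-Id": clientId,
    "CF-Access-Client-Secret": clientSecret,
  };
}

// Example usage (URL is a placeholder):
//   fetch("https://app.example.com/api", { headers: accessHeaders(id, secret) })
```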

The Client ID and Secret pair are valid for one year, at which time you can rotate the tokens. If needed, you can revoke the credentials at any time in the Cloudflare dashboard.

A chatbot with service tokens

Here at Cloudflare, we keep product statistics in an application we secure behind Access. When team members need to query or review data, they login with our identity provider and Access directs them to the tool.

We built a bot to grab reports of product usage and share them directly in chat. However, the bot needed a way to reach the data behind Access without opening up a security hole in the application, so we gave the bot an Access service token.

Each time a team member asks for the latest update on a product statistic, the bot uses its Client ID and Client Secret to login with Cloudflare Access, proving that it has permission to reach the application. Now that the chatbot has service tokens, the data is available to everyone instantly.

What’s next?

You can get started with Access service tokens today by following our guide here. Our chatbot is just one use case. With service tokens, you can leave IP whitelisting behind and authenticate any automated system that needs to reach something behind Access.

Categories: Technology

Better business results from faster web applications - Cloudflare is the fastest

Wed, 06/02/2019 - 16:00

Web performance encompasses a lot of things: page load time, and the responsiveness of web and mobile applications. But the key element underlying all of them is response time. How quickly can the origin server or the cache fulfill a user request? How quickly can a DNS response reach the client device?

The Cloudflare mission is to help build a better Internet for everyone, and we offer multiple services for boosting speed and performance for our customers and users. Cloudflare is faster than the competition when it comes to accelerating performance.

How site speed impacts the bottom line

There is a lot of research out there that confirms what many businesses and web developers already know: speed affects revenue. Better web performance means better user engagement and more conversions. Better performance also results in better SEO, driving up overall traffic numbers and increasing lead generation, sales, or ad revenue.

One study by Google and Bing concluded that on average, a two-second delay in a website's page rendering led to a 4.3% loss in revenue per visitor. Another independent study has shown that 1 additional second of load time reduces conversions by 7%.

How does using Cloudflare affect performance?

According to testing from Cedexis (a company that evaluates CDN performance):

  • Cloudflare is over 50 milliseconds or 23% faster than the nearest competitor over HTTPS for the 95th percentile
  • Cloudflare performs better than all competitors over HTTPS at both the 50th and 95th percentile
  • Cloudflare performs better than all competitors over HTTP at both the 50th and 95th percentile

[Chart: HTTPS performance at the 95th percentile]
[Chart: HTTP performance at the 95th percentile]

Translating domain names into IP addresses quickly and with authority needs an enterprise-ready DNS.  Each webpage requires multiple requests and responses in order to load, so an improvement of even a few milliseconds in how DNS queries are answered adds up quickly.  That's why it's so important to us that our DNS resolvers are the fastest available. According to DNSPerf (a DNS performance benchmarking service):

  • Cloudflare is the fastest authoritative DNS provider, 30% faster globally than the next-fastest competitor
  • Cloudflare is the fastest public resolver, almost 30% faster globally than the next-fastest public DNS resolver
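To make those milliseconds concrete, each of the lookups a page triggers is a small binary query that a resolver must parse and answer. The following is a minimal sketch of the standard DNS wire format (generic protocol code, not Cloudflare's; the helper name is hypothetical):

```python
import struct

def build_dns_query(name: str, qid: int = 0x1234) -> bytes:
    """Encode a minimal DNS query (A record, IN class) in wire format."""
    # Header: ID, flags (recursion desired), 1 question,
    # 0 answer / authority / additional records.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte,
    # e.g. "example.com" -> \x07example\x03com\x00
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    # QTYPE=1 (A record), QCLASS=1 (Internet)
    return header + qname + struct.pack(">HH", 1, 1)

query = build_dns_query("example.com")  # 29 bytes on the wire
```

A resolver has to receive, parse, and answer one of these round trips for every hostname on a page, which is why shaving even a few milliseconds per query compounds so quickly.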

[Chart: DNS performance]

We've seen this in action; customers have reported large performance gains as a result of implementing Cloudflare. Zendesk, for example, improved their response times by 10x, and OKCupid was able to cut their page load times by up to 50% by implementing Cloudflare.

How Cloudflare technology boosts business results

Cloudflare has a wide range of customers, from small businesses to large enterprises, but across all of them there's a similar story: Users engage more, convert more, and bounce less when content renders quickly. For instance, improving page load speed resulted in 62% more conversions for U.S. Xpress. Bidu grew leads by 30% year over year by reducing average full page load times from 13 seconds to 2.3 seconds.

Our innovative products and services help speed up applications and websites:

  • Argo: Cloudflare Argo uses smart routing to find the fastest, least congested path across the Cloudflare network.
  • CDN: The Cloudflare globally distributed CDN caches web content across 165+ global data centers to supercharge web performance.
  • DNS: Cloudflare DNS services shave crucial milliseconds off requests for a DNS lookup.
  • Load Balancing: Cloudflare balances traffic across multiple servers or regions and uses health checks to identify offline servers.
  • Web Optimizations: Cloudflare optimizes web content delivery by bundling JavaScript files, leveraging local storage, adjusting cache headers automatically, and more.

Improving page speed and performance results in higher conversion rates and better user engagement. Cloudflare provides the fastest performance for web applications.

Sign up here or speak to one of our experts to get started.

Categories: Technology

Helping To Build Cloudflare, Part 6: What does Cloudflare's CTO do?

Wed, 06/02/2019 - 08:26

This is the final part of a six part series based on a talk I gave in Trento, Italy. To start from the beginning go here.

If you are still awake there’s really one final question that you might want to know the answer to: What does the CTO do? The reality is that it means different things in different companies. But I can tell you a little about what I do.

The longest temporary job

I didn’t join Cloudflare as CTO. My original job title was Programmer and for the first couple of years I did just that. I wrote a piece of technology called Railgun (a differential compression program used to speed up the connection between Cloudflare and origin web servers) and then I went on to write our WAF. After that I worked on our Go-based DNS server and other parts of the stack.

At some point Lee Holloway decided he didn’t want to manage Cloudflare’s growing staff and Michelle Zatlyn (one of Cloudflare’s founders) asked me if I would ‘temporarily’ manage engineering. This is now the longest temporary job I’ve ever had!

Initially a lot of what I did was manage the team and help interview people. But I was still writing code. More and more, though, what I did was encourage others to do stuff. One day a bright engineer I’d been working with on DNS told me that he thought he could ‘solve DDoS’ if he could be left alone to work on an idea he had.

This was one of those situations where the engineer had shown they were very capable, and it was worth taking a risk. So, I said: “OK, go do that. I’ll write the code you were meant to write; assign all your bugs to me.” That turned out to be a good decision because he built our entire DDoS mitigation system (known internally as gatebot) which has fended off some of the largest DDoS attacks out there.

Of course, like everything else Cloudflare does, things outgrow an individual and need a team. Today gatebot and DDoS mitigation in general are managed by a team of engineers in London and Austin, and the original engineer has moved on to other things. So, encouraging people is an important part of the job.

Slowly my temporary job got more and more things added to it. I was running Cloudflare’s IT department, SRE and technical operations, the network, infosec and engineering. Some temporary job. Slowly I got rid of some of those things. IT is now its own department as is infosec. Those things are far better run by other people than me!

The challenge of managing a team split around the entire globe (I had staff in San Francisco, London and Singapore) meant that new leadership was needed and so I recruited a head of engineering and SRE/ops had its own leader. Today more than 250 people sit in my overall team.

Along the way I stopped writing code and I did less and less day to day management as the leaders were able to do that. But something else became more important: things like this talk and sales.

It's not enough to build, you have to sell

Robert Metcalfe, who invented Ethernet while at Xerox PARC, said “I didn’t get rich by inventing Ethernet, I got rich by selling it”. This is an important point. It’s not enough to have good technology, you have to get people to hear about it and you have to sell it.

One way Cloudflare markets is through our blog. You may not realize it, but we have a very, very strong brand because we write those super technical blog posts. They don’t look like marketing, but they are. And another way we market is by doing this sort of thing: going places and talking.

But frequently, what I do is talk directly to customers. On Monday afternoon and evening, I was on two long video conferences with big potential customers in the US.  Yesterday, I was on a call about our partnership with IBM. This morning I did a call with a potential client in Germany before flying to Verona. So… one thing the CTO does is a lot of sales!


One thing I am not is the source of all technical wisdom in the company. I was once introduced by a law school friend of Matthew Prince’s as “the brain behind Cloudflare”. That’s so far from the truth. There are many jobs in engineering at Cloudflare that I am incapable of doing today without an enormous amount of learning. And teams are much stronger than individuals.

I do, on occasion, use experience to push the company in a certain direction. Or simply encourage something that I think is the right technology (I did this with our adoption of ClickHouse as a column-oriented database, with Go, and recently with Rust). With Rust I decided to learn the language myself, make a little project and put it on GitHub. That’s enough in my position to make people realize it’s OK to use Rust.


So, in conclusion, here are some things to learn from my experience and the creation of Cloudflare. Be audacious, share widely, be open, work hard, spend a lot of time finding the right people and helping them, create teams, rewrite code, panic early! And above all, while doing this remain humble. Life comes at you fast, problems will arise, the wheel of karma spins around, you’ll need the help of others. Build something great and be humble about your creation.

Helping to Build Cloudflare
Categories: Technology

Cloudflare Support for Azure Customers

Tue, 05/02/2019 - 16:00
Cloudflare Support for Azure Customers

Cloudflare seeks to help its end customers use whichever public and private clouds best suit their needs.  Towards that goal, we have been working to make sure our solutions work well with various public cloud providers including Microsoft’s Azure platform.

Cloudflare Support for Azure Customers

If you are an Azure customer, or thinking about becoming one, here are three ways we have made Cloudflare’s performance and security services work well with Azure.

1) The development of an Azure application for Cloudflare Argo Tunnel.

We are proud to announce an application for Cloudflare Argo Tunnel within the Azure marketplace. As a quick reminder, Argo Tunnel establishes an encrypted connection between the origin and the Cloudflare edge. The small tunnel daemon establishes outbound connections to the two nearest Cloudflare PoPs, and the origin is only accessible via the tunnel between Cloudflare and the origin.

Because these are outbound connections, there is likely no need to modify firewall rules, configure DNS records, etc.  You can even go so far as to block all IPs on the origin and allow traffic only to flow through the tunnel. You can learn more here. The only prerequisite for using Argo Tunnel is to have Argo enabled on your Cloudflare zone. You can sign up a new domain here.

You can find instructions on how to configure Argo Tunnel through the Azure interface here.

2) Azure is promoting a serverless solution for its Static Web Hosting service, and Cloudflare wants to help you secure it!

Cloudflare makes SSL issuance and renewal remarkably easy, and it is included in every plan type. There are a few extra steps to getting this to work on Azure’s serverless platform, so we’ve created this guide for you to get started.

3) Use of Cloudflare’s speedy DNS resolver, 1.1.1.1, with Azure

Cloudflare has created a free DNS resolver to improve DNS response times. We blogged about this last year. A few important takeaways are: this resolver runs in all our data centers globally and thus is highly performant, is future-proofed for emerging DNS protocols that enhance security (DNS over TLS/HTTPS), and minimizes the DNS query information shared with intermediate resolvers.

Cloudflare does daily monitoring of this resolver to make sure it consistently performs well on the Azure platform. Here are a few steps to take to make use of 1.1.1.1 if you are an Azure user.

Stay tuned for further Cloudflare support for Azure.

Categories: Technology

Helping To Build Cloudflare, Part 5: People: Finding, Nurturing and Learning to Let Go

Tue, 05/02/2019 - 08:41

This is part 5 of a six part series based on a talk I gave in Trento, Italy. To start from the beginning go here.

So, let me talk a bit about people. Software is made by people. Sometimes individuals but more likely by teams. I’ve talked earlier about some aspects of our architecture and our frequent rewrites but it’s people that make all that work.

And, honestly, people can be an utter joy and a total pain. Finding, keeping, nurturing people and teams is the single most important thing you can do in a company. No doubt.

Finding People

Finding people is really hard. Firstly, the technology industry is booming, and so engineers have a lot of choices. Countries create special visas just for them. Politicians line up to create mini-Silicon Valleys in their countries. Life is good!

But the really hard thing is interviewing. How do you find good people from an interview? I don’t know the answer to that. We put people through on average 8 interviews and a pair programming exercise. We look at open source contributions. Sometimes we look at people’s degrees.

We tend to look for potential. An old boss used to say, “Don’t hire people who’ve already done the job, hire those who can learn to do it”. It’s an interesting idea. People naturally want to hire people who know how to do something. But technology changes all the time, so what you are really looking for are people who are curious.

And you won’t find curiosity by looking at degrees and qualifications. You’ll find it by asking about what people do and think. What they enjoy and what they’ve done when no one was looking.

Another thing that’s really important is to ask, “Can this person express themselves?”. It’s rare that it’s OK to have someone who can’t communicate with others. Sure, you may come across that one genius who you want to hire who only speaks in grunts. But real magic happens when teams (especially small teams of 3 to 12 people) make software together. And teams are built on communication. So, look for people who can express what they are thinking: might be through email, or drawing, or speaking.

Letting People Go

You’re also going to find that you’ve hired the wrong people or built the wrong teams. Don’t be afraid to move people around. Last year 16% of people moved to a different job inside Cloudflare (not just teams!). You should constantly be looking at your teams and asking how well they are performing.

It’s not a failure to change a team, or reorganize, or move people about. In fact, it’s a failure as a manager to NOT do that.

Don’t be afraid to let people go.

It’s sad but you’ll think someone is great when you interview them and then they turn out not to be. Or someone gets too big for their boots and starts behaving like they own the place. Sometimes people need to leave the company. This is by far the worst thing a manager has to do (to this day I hate letting people go).

Over the last few years I’ve been in the position of having to decide whether to remove people from senior management positions in engineering. Making those decisions is really hard. You might enjoy working with someone but realize that their team isn’t doing so well, or they don’t seem to be achieving what you expect from them.

I know from my own experience I’ve always taken too long to make changes. I always want to give people a second or third chance. And usually it’s been a mistake. Actually, not usually, always. It’s unbelievably tough to say to someone “I don’t believe that you are the right person to be X and so I’ve decided to replace you”. But if you do that be 100% clear. It’s fairer to the person being moved on that they know that a concrete decision has been made.

I think one of the most important things I say to people who work for me is: “You need to tell me if the job you are doing isn’t making you happy”. Because I may not realize. I’m only human after all. One of my engineers took me up on that one day and I’m glad he did. This was someone who reported directly to me: a staff engineer with a ton of experience.

One day he came to me and said, “I don’t want to work for you any more, I want to work for X on Y”. Perhaps he was nervous to say that to me but putting people in jobs they enjoy is key. A manager isn’t successful because they grow a big team, they are successful when their team builds awesome software and awesome software gets built by people who feel they are doing their best work. That should be your goal: help people do their best work. Help people grow and learn.

Diversity and Inclusion

There’s a lot of discussion in the software industry about diversity and inclusion. Many years ago, I had a small team of engineers in one of my first management jobs. There were five of us: Alice, Tanvi, Roman, Dan and me. Two women, three men. It was one of the most fun teams I’ve ever worked on because of that mixture of people and backgrounds. We built a really nice piece of software.

Lots of research shows that diverse teams are stronger, happier and do better work. You’re really losing out if you don’t have a diverse team. This is an area Cloudflare is working very hard on (and especially in the engineering team). Not because it’s trendy or cool, but because it means we’ll be a better, stronger, smarter company.

To do so we’ve looked at the language we use in job descriptions, the way we interview people, and how we source potential candidates. It also meant reviewing our benefits and internal policies to make sure that the company is attractive to all sorts of people. It’s working and I expect that by the end of 2019 we’ll be able to talk about all that we did.

Bottom line: there are great people out there from all sorts of backgrounds. Go find ‘em!

Helping to Build Cloudflare
Categories: Technology

The Black Elephant in the Room

Mon, 04/02/2019 - 18:00
The Black Elephant in the Room

When I come to work at Cloudflare, I understand and believe in this main purpose of why we exist: Helping to Build a Better Internet.

The reason why we feel like we can help build a better internet is simply because we believe in values that instill a nature of freedom, privacy, and empowerment in the tool that helps individuals broaden their intellectual and cultural perspective on the daily.

Knowing all of this, our own great company needs to be able to build itself daily into a better company. And that starts with having those conversations which are always uncomfortable. And let me be clear in saying this, being uncomfortable is a good thing because that makes one grow and not be stagnant. Saying all that, here we go...

The Afrocultural community at Cloudflare should take pride in being diverse and inclusive for all just as we all work together to help build a better internet for all.

And one of the many ways we can build upon this effort is to do more than just belong in a workplace and eventually build off of that, feeling normal over time. When I say belong, it’s more than the "Impostor Syndrome" that normally hits every new hire at any great company. The "Impostor Syndrome" phenomenon can be explained by the fact that even though someone may have all the credentials that make them seem like they fit in that particular space, a human being can feel like they don't belong there because of self-doubt or nervous, initial insecurity. This notion eventually goes away over time because this person proves not only to their team that they belong in that space but also to themselves.

That’s the problem, however. That feeling doesn’t seem like it goes away for cultural groups, especially that of the Afrocultural community.

That's the Black Elephant in the Room and it's about time we talk about this.

Our community came together because we needed each other. We wanted to congratulate each other when one of us surpassed a goal at the end of a quarter. We wanted to have dialogue with not only our team but with other communities in Cloudflare, to empower, encourage, and remind each other every now and then that we are a part of what makes working at Cloudflare so great. From that moment on we knew that we had a sense of community and diversity. Cloudflare is a great place to work, but we knew that we need each other to make this an unforgettable experience. From that first meeting, we knew something special was born, and that is Afroflare.

The Black Elephant in the Room

And so we're able to talk about the issues that matter to us: diversity in the workplace, Afrocultural pride, a new and fresh view of the Black culture at work, or even just saying, "Hey, you're dope." More importantly though, we're done talking among each other. No. We now need to have the talk with our other brethren on this little blue ball in our Solar System called Earth. How can other Afro American employees get to feel welcomed into the tech world? What do young African American men and women need to strengthen their resumes and also empower themselves to be better and smarter individuals? In what ways can Cloudflare help lead this charge?

After all....we're just discussing the Black Elephant in the room.

Categories: Technology

Helping To Build Cloudflare, Part 4: Public Engagement

Mon, 04/02/2019 - 08:41

This is part 4 of a six part series based on a talk I gave in Trento, Italy. To start from the beginning go here.

We don’t believe that any of our software, not a single line of code, provides us with a long-term advantage. We could, today, open source every single line of code at Cloudflare and we don’t believe we’d be hurt by it.

How we think about Open Source

Why don’t we? We actually do open source a lot of code, but we try to be thoughtful about it. Firstly, a lot of our code is so Cloudflare-specific, full of logic about how our service works, that it’s not generic enough for someone else to pick up and use for their service. So, for example, open sourcing the code that runs our web front end would be largely useless.

But other bits of software are generic. There’s currently a debate going on internally about a piece of software called Quicksilver. I mentioned before that Cloudflare used a distributed key-value store to send configuration to machines across the world. We used to use an open source project called Kyoto Tycoon. It was pretty cool.

But it ended up not scaling to our size. It was great when we had a small number of locations worldwide, but we ran into operational problems with 100s of locations. And it wasn’t, by default, secure and so we had to add security to it. Once we did, we open sourced that change, but at some point when using open source software you have to make a “modify or rewrite” decision.

We’d done that in the past with PowerDNS. Originally our DNS service was based on PowerDNS. And it was great. But as we scaled, we ran into problems fitting it into our system. Not because there’s something wrong with PowerDNS but because we have a lot of DNS-related logic and we were shoehorning things into PowerDNS, and it was getting less and less maintainable for us. This was not PowerDNS' fault; we'd built such a large edifice of business logic around it that PowerDNS was being crushed by the sheer weight of that logic: it made sense to start over and integrate logic and DNS into a single code base.

Eventually we wrote our own server, RRDNS, in Go, that is now the code behind the largest and fastest authoritative DNS service on the planet. That’s another piece of software we haven’t open sourced. That one because it’s riddled with business logic and handling of special conditions (like the unique challenges of working inside China).

But back to Quicksilver. It’s based on LMDB and syncs all data and code across our global network. Typically, a change (you click a button in our UI, or you upload code for our edge compute product) is distributed globally in 5s. That’s cool.

And Quicksilver is generic. It doesn’t contain lots of Cloudflare-specific logic and it’s likely useful for others. The internal debate is about whether we have time to nurture and handle the community that would grow up around Quicksilver. You may recently have seen the creator of Ruby saying on Twitter “We are mere mortals”, pointing out that the people behind popular open source projects only have so much time. And we take a lesson from the creators of Kyoto Tycoon who have now largely abandoned it to do other things.

Perhaps Quicksilver will get open sourced this year, we’ll see. But our rule for open sourcing is: “Is this something others can use and is this something we have time to maintain in public?”. Of course, where we modify existing open source software, we upstream everything we can. Inevitably, some projects don’t accept our PRs and so we have to maintain internal forks.
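As a toy illustration of the idea, and emphatically not Quicksilver's actual code, a log-shipped key-value store can be sketched as a set of replicas each applying the same ordered update log to a local store (all names here are hypothetical):

```python
class Replica:
    """Toy model of log-shipped key-value replication: every location
    applies the same ordered update log, so all replicas converge on
    identical state. Real systems like Quicksilver (LMDB-backed) are
    far more involved; this only shows the core idea."""

    def __init__(self):
        self.store = {}    # local key-value state at this location
        self.applied = 0   # index of the next log entry to apply

    def sync(self, log):
        # Apply only the entries this replica has not yet seen.
        for key, value in log[self.applied:]:
            self.store[key] = value
        self.applied = len(log)

# A control-plane change (e.g. a dashboard click) appends to the log;
# every edge replica that syncs the log converges on the same value.
log = [("zone:example.com", "config-v1"),
       ("zone:example.com", "config-v2")]
london, singapore = Replica(), Replica()
london.sync(log)
singapore.sync(log)
```

The appeal of this model is that replicas only ever pull the tail of the log they have not yet applied, which keeps a global sync cheap no matter how many locations subscribe.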

How we think about Patents

While we’re on the topic of intellectual property let’s talk about patents. Cloudflare has a lot of patents. Although it might be nice to live in a world where there were no software patents, it’s a little like nuclear weapons. It’s very hard for one country to disarm unilaterally because a power imbalance is left behind. If Cloudflare didn’t patent aspects of our software, then others would and would then use them against us.

So, we patent for defensive reasons: to stop others from using the patent system against us.

Working With Governments

And speaking of patents let’s talk about governments. Lots of technology companies think they are too cool for school. They don’t need to think about governments because technology moves faster than them and what do those old, boring lawmakers know about anything anyway?

Wrong. Dead wrong.

Yes, governments move slowly. You actually want them to. Imagine if governments changed policies as fast as chat apps get launched. It would be a nightmare. But just because they are slow, they can’t be ignored.

Put simply, governments have tanks and you don’t. Eventually lawmakers will make laws that affect you, and unless you’ve spent time explaining to them what it is you do, you may have a nasty surprise.

Cloudflare decided very early on to engage with lawmakers in the US and Europe. We did this by helping them understand what is happening in the Internet, what challenges we foresee, and helping them with the technical arcana that we all deal with.

If there’s any chance that your business ends up being regulated by a government then you should engage early. Cloudflare thinks a lot about things like copyright law, the fight against online extremism, and privacy. We have to because our network is used by 13 million web sites and services and all manner of things pass through it.

Lots of times people get mad at us because they don’t like a particular customer on our network. This is tough for us because oftentimes we don’t like them either. But here’s the tricky thing: do you really want me, or Matthew, deciding what’s online? Because many times that’s what angry mobs are asking.

“Shut this down”, “Throw this off your service”. It’s odd that people are asking that corporations be gatekeepers when corporations answer to shareholders and not the people. The right answer is that if you see something you don’t like online: engage in the political process in your country.

The transparency of democratic institutions and, in particular, the judiciary is vital to the long-term survival of countries. It’s through those institutions that people need to express their desire for what’s allowed and not allowed.

How do you engage with governments? Every single government has committees and advisory bodies that are dying to have people from industry help out. Go find the bodies that are doing work that overlaps with your company, don’t be put off by how old-fashioned they seem, and get involved.

More tomorrow...

Helping to Build Cloudflare
Categories: Technology

Helping To Build Cloudflare, Part 2: The Most Difficult Two Weeks

Sat, 02/02/2019 - 10:00

This is part 2 of a six part series based on a talk I gave in Trento, Italy. Part 1 is here.

It’s always best to speak plainly and honestly about the situation you are in. Or as Matthew Prince likes to put it: “Panic Early”. Long ago I started a company in Silicon Valley which had the most beautiful code. We could have taught a computer science course from the code base. But we had hardly any customers, and we failed to “Panic Early” and face up to the fact that our market was too small.

Ironically, the CEO of that company used to tell people “Get bad news out fast”. This is a good maxim to live by, if you have bad news then deliver it quickly and clearly. If you don’t the bad news won’t go away, and the situation will likely get worse.


Cloudflare had a very, very serious security problem back in 2017. This problem became known as Cloudbleed. We had, without knowing it, been leaking memory from inside our machines into responses returned to web browsers. And because our machines are shared across millions of web sites, that meant that HTTP requests containing potentially very sensitive information could have been leaked.

Worse, this information was being cached by search engines. So, anyone could go to Google or Bing or Baidu and look for sensitive information just by knowing a few keywords. Luckily for us, Google’s Project Zero discovered that we were leaking by looking at Google’s crawler cache. They informed us and we were quickly able to stop the leak.

But that didn’t diminish the fact that private information (much of which would have been transmitted encrypted) had been cached by search engines. Although we stopped the leak within 45 minutes the cleanup task was massive. It was massive firstly because we had to find what had been leaked and secondly because we had to find all the search engines with caches and somehow ask them to delete cached data.

None of the search engines had handled something like this before. We were asking for mass deletion of data and it took a long time (at least it felt like a long time) to get to the right people and start to get cached data deleted.

From the very first night of Cloudbleed I started collecting information to be able to write the public disclosure. Ultimately, when Project Zero wanted to go public, we were ready with a long, transparent blog post on the subject and were able to talk about it.

It was, by far, the most difficult week of my career. Firstly, we had the bug itself, secondly, we had the cleanup, and then we had to tell people what had happened. Throughout that week I barely slept (and I am not exaggerating) and a large team of people across Cloudflare in the US, UK and elsewhere kept in contact constantly. We learnt that it’s possible to keep a Google Hangout between two offices running, literally, for days without interruption.

Known Unknowns

The hardest thing was that we seriously did not know, at the beginning, whether Cloudflare would survive. Right at the start it looked terrible, it was terrible, and we had two questions: “What private data has actually been leaked and cached?” and “Did anyone find this and actively exploit it?”.

We answered both by extensive searching and collating of information from search engines. Ultimately, others and I called customers and spoke to them on the phone. We were able to tell them what we’d found and, statistically, what was likely to have leaked.

The second question was answered by looking for evidence of exploitation in our logging systems. But there was something very tricky: Cloudflare had long limited the amount of data it logs for privacy reasons. So, we had to dig into statistical analysis of all sorts of data (crash rates, saved core dumps, errors in Sentry, sampled data) to look for exploitation.

We split into separate teams to look for different evidence and only myself and Matthew Prince knew what each team was seeing. We did that because we didn’t want one team to influence another. We wanted to be sure that we were right before publishing our second blog with more detailed information.

We didn’t find evidence of exploitation. And while serious, the data cached in search engines was found to contain little really private information. But it was very, very serious and we all knew that this could have been worse.

Although I look back at those two weeks as the worst of my career, to quote Charles Dickens: “It was the best of times, it was the worst of times”. Most of the company didn’t know Cloudbleed had happened until we went public. The morning it became public I showered very early and took a cab to the office.

Normally, the office is quite quiet in the morning and I was stunned to walk into an office full of people. People who asked me “What can we do?”. It was an incredible feeling. We printed a large poster of Winston Churchill staring down at the team saying, “If you’re going through Hell, keep going!”. Everyone pitched in.

In the middle of it someone from the press, the BBC I think, asked me if I’d changed any passwords because of Cloudbleed. I said I had not. And that was true. I didn’t change anything personally. But in the middle of that firestorm I took a lot of criticism from armchair critics for that.

Although terrible, Cloudbleed reinforced the culture of Cloudflare: openness and helping others. We were all in together and we got through it. And our customers saw that: we didn’t lose major customers, in fact, we gained customers who told us “We want to work with you because you were so open”.

Helping to Build Cloudflare
Categories: Technology

Helping To Build Cloudflare, Part 3: Audacity, Diversity and Change

Fri, 01/02/2019 - 17:24

This is part 3 of a six-part series based on a talk I gave in Trento, Italy. To start from the beginning go here.

After Cloudbleed, lots of things changed. We started to move away from memory-unsafe languages like C and C++ (there’s a lot more Go and Rust now). And every SIGABRT or crash on any machine results in an email to me and a message to the team responsible. And I don’t let the team leave those problems to fester.


So Cloudbleed was a terrible time. Let’s talk about a great time: the launch of our public DNS resolver, 1.1.1.1. That launch is a story of an important Cloudflare quality: audacity. Google had launched 8.8.8.8 years ago and had taken the market for a public DNS resolver by storm. Their address is easy to remember, their service is very fast.

But we thought we could do better. We thought we could be faster, and we thought we could be more memorable. Matthew asked us to get the 1.1.1.1 address and launch a secure, privacy-preserving, public DNS resolver in a couple of months. Oh, and make it faster than everybody else.

We did that. In part we did it because of good relationships we’ve established with different groups around the world. We’ve done that by being consistent about how we operate and by employing people with established relationships. This is partly a story about how diversity matters. If we’d been the sort of people who discriminated against older engineers, a lot of Cloudflare would not have been built. I’ll return to the topic of diversity and inclusion later.

Through relationships and sharing we were able to get the 1.1.1.1 address. Through our architecture we were able to be the fastest. Over years and years, we’ve been saying that Cloudflare is for everyone on the Internet. Everyone, everywhere. And we put our money where our mouths are and built 165 data centers across the world. Our goal is to be within 10ms of everyone who uses the Internet.

And when you’re everywhere it’s easy to be the fastest. Or at least it’s easy if you have an architecture that makes it possible to update software quickly and run it everywhere. Cloudflare runs a single stack of software on every machine world-wide. That architecture has made a huge difference versus our competitors and has allowed us to scale quickly and cheaply.

Cloudflare's Architecture

It was largely put in place before I joined the company. Lee Holloway (the original architect of the company), working with a small team, built a service based on open source components (such as Postgres and NGINX) that had a single stack of software doing caching, WAF, DDoS mitigation and more.

It was all bound together by a distributed key-value store that sends configuration to every machine we have around the world in seconds. And centrally there was a large customer database in Postgres and a lot of PHP to create the public web site.

Although we have constantly changed our software, this architecture still exists. Early on at Cloudflare I argued that there should be some special machines in the network doing special tasks (like DDoS mitigation). The truth is I wanted to build those machines because, technically, it would have been really exciting to work on that sort of large, complex, low-latency software. But Lee and Matthew told me I was wrong: a simple architecture could scale more easily.

And they were right. We’ve scaled to 25Tbps of network capacity with every machine doing every single thing. So, get the architecture right and make sure you’re building things for the right reasons. Once you can scale like that, adding 1.1.1.1 was easy. We rolled out the software to every machine, tested it and made it public. Overnight it was the fastest public DNS resolver there was, and it remains so.

Naturally, our software stack has evolved a lot since Lee started working on it, and most parts of it have been rewritten. We’ve thrown away all the code that Matthew Prince wrote in PHP in the earliest days, and we’ve started to throw away code that I wrote in Lua and Go. This is natural, and if you look back at code you wrote five years ago and feel that it’s still fit for purpose, then you are either fooling yourself or not growing.

The Price of Growth is Rewrites

It seems that about every order-of-magnitude change in the use of software requires a rewrite. It’s sad that you can’t start with the ultimate code base and the ultimate architecture, but the reality is that it’s hard enough to build the software you need for today’s challenges, so you can’t worry about tomorrow’s. It’s also very hard to anticipate what you’ll actually need when your service grows by 10x.

When I joined, most of our customers had a small number of DNS records, and the software had been built to scale to thousands or millions of customers, each with a small number of records. That’s because our typical customer was a small business or an individual with a blog. We were built for millions of them.

Then along came a company that had a single domain name with millions of subdomains. Our software immediately fell over. It just wasn’t built to cope with that particular shape of customer.‌‌

So, we had to apply an immediate band-aid and start re-architecting the piece of software that handled DNS records. I could tell you ten other stories like that. But the lesson is clear: you don’t know what to expect up front, so keep going until you get there. And be ready to change quickly.


Helping To Build Cloudflare, Part 1: How I came to work here

Fri, 01/02/2019 - 13:49

This is the text I prepared for a talk at Speck&Tech in Trento, Italy. I thought it might make a good blog post. Because it is 6,000 words, I've split it into six separate posts.

Here's part 1:

I’ve worked at Cloudflare for more than seven years. Cloudflare itself is more than eight years old. So, I’ve been there since it was a very small company. About twenty people in fact. All of those people (except one, me) worked from an office in San Francisco. I was the lone member of the London office.

Today there are 900 people working at Cloudflare spread across offices in San Francisco, Austin, Champaign IL, New York, London, Munich, Singapore and Beijing. In London, my “one-person office” (which was my spare bedroom) is now almost 200 people and in a month, we’ll move into new space opposite Big Ben.

The original Cloudflare London "office"

The numbers tell a story about enormous growth. But it’s growth that’s been very carefully managed. We could have grown much faster (in terms of people); we’ve certainly raised enough money to do so.

I ended up at Cloudflare because I gave a really good talk at a conference. Well, it’s a little more complex than that, but that’s where it all started for me without me knowing it. Fifteen years ago, a guy called Paul Graham had started a conference at MIT in the US. At the time Paul Graham was known for being an expert LISP programmer and for having an idea about how to deal with email spam. It wasn’t until a year later that he started Y Combinator.

Paul invited me to give a talk at this MIT Spam Conference about an open source machine learning email filter program I had written. So, I guess the second reason I ended up at Cloudflare is that I wrote some code and open sourced it. That program is called POPFile and you can still download it today (if you’d like your email sorted intelligently).

I wrote POPFile because I had an itch to scratch. I was working at a startup in Silicon Valley and I was receiving too much email. I used Microsoft Outlook and I wanted my mail sorted into different categories and so I researched techniques for doing that and wrote my own program. The first version was in Visual Basic, the second in Perl.

So, I got to Cloudflare because of a personal itch, open source, public speaking and two languages that many people look down on and joke about. Be wary of doing that. Although languages do make a difference, the skill of a programmer in their chosen language matters a lot.


If there’s a lesson in there it’s… share with others. Share through open source, through giving talks, and through how you interact with others. The more you give the more people will appreciate you and the more opportunity you will have. There’s a great book about this called Give and Take by Adam Grant. We gave everyone at Cloudflare a copy of that book.

One of the people who saw me speak at MIT was Matthew Prince, Cloudflare’s CEO, who was also speaking. He saw me speak and thought I was interesting, and I saw him speak and thought the same thing.

Over a period of years Matthew and I stayed in contact and when he, Michelle and Lee started Cloudflare he asked me to join. It was the wrong time for me and, to be honest, I had a lot of doubts at the time about Cloudflare. I didn’t think many people would sign up for the service.

I’m glad I was wrong. And I am glad that Matthew was persistent in trying to get me to join. Today there are over 13 million domains registered to Cloudflare and I have ended up as CTO. But I wasn’t hired as CTO and it wasn’t my ambition. I joined Cloudflare to work with people I liked and to do cool stuff.

I’m very lucky that my background, upbringing, parents and career have enabled me to work with people I like and do cool stuff. The cool stuff changes of course. But that’s technology for you.

It's Terrible

When I was first at Cloudflare I went to quite a few meetings with Matthew. Especially meetings with investors and people would always ask him in a jovial manner “How’s it going?” and he would always answer “It’s terrible”. At first, I thought he was just being silly and was playing for a laugh to see how people would react.

In part, he was doing that but there’s also a lot of truth in the fact that startups are “terrible”. They are very, very hard. It’s very easy to get distracted by the huge successes of a small number of companies and not face the reality that building a company is hard work. And hard work isn’t enough. You might not have enough money, or the right people, or you might discover that your market is too small.

Silicon Valley lives in a schizophrenic state: everyone outwardly will tell you how they are “killing it” and doing so well. But inside they are full of fear and doubt. Mentally that’s a very hard thing to sustain and it’s not surprising that some people suffer mental health problems because of it. We shouldn’t be ashamed of admitting that things are hard, as Matthew did.

Silicon Valley also likes to use very positive language for things that might be a little negative or tough. One such term is “pivot”. There’s nothing wrong with changing direction or responding to customer or market demands. But face the reality that you had to change direction. That’s OK. To quote George Bernard Shaw: “Progress is impossible without change, and those who cannot change their minds cannot change anything”.

Part 2 will be published tomorrow.
