Blogroll: CloudFlare

I read blogs, as well as write one. The 'blogroll' on this site reproduces some posts from some of the people I enjoy reading. There are currently 33 posts from the blog 'CloudFlare.'

Disclaimer: Reproducing an article here does not necessarily imply agreement or endorsement!

Cloudflare is on a mission to help build a better Internet.

Creating a single pane of glass for your multi-cloud Kubernetes workloads with Cloudflare

Fri, 23/02/2018 - 17:00

(This is a crosspost of a blog post originally published on Google Cloud blog)

One of the great things about container technology is that it delivers the same experience and functionality across different platforms. This frees you as a developer from having to rewrite or update your application to deploy it on a new cloud provider—or lets you run it across multiple cloud providers. With a containerized application running on multiple clouds, you can avoid lock-in, run your application on the cloud for which it’s best suited, and lower your overall costs.

If you’re using Kubernetes, you probably manage traffic to clusters and services across multiple nodes using internal load-balancing services, which is the most common and practical approach. But if you’re running an application on multiple clouds, it can be hard to distribute traffic intelligently among them. In this blog post, we show you how to use Cloudflare Load Balancer in conjunction with Kubernetes so you can start to achieve the benefits of a multi-cloud configuration.

To continue reading, follow the Google Cloud blog here. Or, if you are ready to get started, we have created a guide on how to deploy an application using Kubernetes on GCP and AWS along with our Cloudflare Load Balancer.


Categories: Technology

Kathmandu, Nepal is data center 123

Fri, 23/02/2018 - 01:13

We said that we would head to the mountains for Cloudflare’s 123rd data center, and mountains feature prominently as we talk about Kathmandu, Nepal, home of our newest deployment and our 42nd data center in Asia!

Five and three quarter key facts to get started:

  • Nepal is home to the highest mountain in the world.
  • Kathmandu has more UNESCO heritage sites in its immediate area than any other capital!
  • The Nepalese flag isn’t a rectangle. It’s not even close!
  • Nepal has never been conquered or ruled by another country.
  • Kathmandu, Nepal is where Cloudflare has placed its 123rd data center.
  • Nepal’s timezone is 5 hours 45 minutes ahead of GMT.

The mountainous nation of Nepal is home to Mount Everest, the highest mountain in the world, known in Nepali as Sagarmāthā. Most of us learn that at school; however, there are plenty of other mountains located in Nepal. Here are the ones above 8,000 meters (extracted from the full list) to get you started:

  • Mount Everest at 8,848 meters
  • Kanchenjunga at 8,586 meters
  • Lhotse at 8,516 meters
  • Makalu at 8,463 meters
  • Cho Oyu at 8,201 meters
  • Dhaulagiri I at 8,167 meters
  • Manaslu at 8,156 meters
  • Annapurna I at 8,091 meters

Photo of Annapurna taken outside Pokhara by the blog author

As we said, Nepal is a very mountainous nation! Some of these mountains are shared with neighboring countries. In fact, the whole Himalayan range stretches much further than just Nepal, encompassing neighboring countries such as Bhutan, China, India, and Pakistan.

Nepal’s flag

The official flag of Nepal is not a rectangle; it has a unique shape, and no other flag comes close to it. At first viewing it looks simple: just two triangles (representing the Himalayas) placed vertically against the flagpole. Nope - the triangles have very specific dimensions.


The flag's symbolism goes beyond the mountains. The two white symbols represent the calming moon and the fierce sun, or sometimes the cool weather in the Himalayas and the warm lowlands.

But back to those two triangles. Ignoring the old adage ("It was my understanding that there would be no math."), let’s grab what Wikipedia says about the shape of this flag and see if we can follow along.

First off, let’s explain irrational vs rational numbers (or ratios or fractions). A rational number is a simple P/Q number like 1/2, 3/4, 1/5, or even 16/9. The numerator and denominator must both be integers. Even 100003/999983 (using prime numbers) is a rational number. The denominator must not be 0, as division by zero is undefined.

An irrational number is everything else: if it can’t be written as P/Q, it’s irrational. Such a number has a decimal expansion that neither terminates nor becomes periodic. For example, π or Pi (3.141592653589793238462…), e or Euler's Number (2.718281828459045235360…), and the square root of 2 (1.414213562373095…) are all irrational numbers. Don’t be fooled by 4/3: while it’s impossible to write out in full (1.33333… continuing forever), its expansion is periodic, so it’s actually a rational number.

(Read up about Hippasus, who’s credited with discovering the existence of irrational numbers, if you want an irrational murder story!)

That’s enough math theory; now back to the Nepali flag. Each red triangle has a rational ratio. The flag starts with simple 1:1 and 3:4 ratios, and that’s the easy part. We are all capable of grabbing paper or cloth and making a rectangle that’s 3 inches by 4 inches, or 1.5 meters by 2 meters, or any other 3:4 ratio. It’s simple math. What gets complicated is adding the blue border. For that, we need to read up on what the On-Line Encyclopedia of Integer Sequences (OEIS Foundation) and Wikipedia say. They both go into great depth to describe the full mathematical dimensions of the flag. Let’s just paraphrase them slightly:

[Diagram: the flag’s construction, paraphrased from OEIS and Wikipedia]

However, the math (and geometry) award goes to the work done to produce "Calculation of the aspect ratio of the national flag of Nepal". The final geometric drawing is this:

Berechnung des Seitenverhältnisses der Nationalfahne von Nepal ("Calculation of the aspect ratio of the national flag of Nepal")

Yeah - that’s going a bit too far for a Cloudflare blog! Let’s just say that Nepal’s flag is unique and quite interesting.


We are especially excited to announce our Kathmandu data center while attending the APRICOT conference, being held in Nepal this year. The event, supported by APNIC, the Regional Internet Registry (RIR) for the Asia-Pacific region, attracts leaders from the Internet industry's technical, operational, and policy-making communities. Cloudflare's favorite part of APRICOT is the Peering Forum track on Monday.


Nepal is just one of eight countries making up the SAARC (South Asian Association for Regional Cooperation) organization. Headquartered in Kathmandu, it comprises Afghanistan, Bangladesh, Bhutan, India, Nepal, the Maldives, Pakistan and Sri Lanka.

Cloudflare has already deployed into neighboring India, and any astute reader of these blogs will know we are always working on adding sites where and when we can. The SAARC countries are in our focus.

Build, build, build!

For Cloudflare‘s next two data centers, we head to a different continent, way south of the equator. Throughout 2018, we’ll announce a stream of deployments across many different cities, each improving the security and performance of millions of our customers.

If you’re motivated by the idea of helping build one of the world's largest networks, come join our team!

Categories: Technology

#PressForProgress - International Women’s Day 2018 | A Cloudflare & Branch Event

Thu, 22/02/2018 - 15:46
#PressForProgress - International Women’s Day 2018 | A Cloudflare & Branch Event

Almost a year ago, I began my journey in the tech industry at a growing company called Cloudflare. I’m a 30-something paralegal and although I didn’t know how to write code (yet), I was highly motivated and ready to crush. I had worked hard for the previous two years, focused on joining a thriving company where I could grow my intelligence, further develop my skill set and work alongside successful professionals. And finally, my hard work paid off; I landed the job at Cloudflare and booked a seat on the rocket ship.

After the initial whirlwind that accompanies this fast-paced field subsided, motivation, inspiration, success, momentum and endurance began to flood my neurons. I loved the inner workings of a successful startup, felt the good and bad of the tech industry, related to and admired the female executives and most importantly, wanted to give something back to the community that adopted me.

Venus Approaching the Sun. Source: Flickr

During a routine chat with my dad, I pitched what I thought was a crazy idea. Crazy because I was so used to being told “no” at previous jobs, used to not having my ideas taken seriously, and also used to not being given opportunities in my career. My idea was simple: “Wouldn’t it be great to have an International Women’s Day event at Cloudflare?” We talked and texted for days about the idea. It had merit and as scared as I was, I wanted to pitch it! As my dad and I discussed the idea further, it evolved into a full-blown plan of inviting renowned female influencers to attend and share their experiences and accomplishments of working in the tech industry. I wanted it to be a motivational celebration.

After receiving a quick green light from my supervisor and chatting with executives, it happened. Cloudflare got behind the event. 100 percent. And why wouldn’t they? Cloudflare relies on the best and the brightest to do what we do, no matter what. Of course Cloudflare would support an event for kick-ass women!

Source: Pixnio

Please join Cloudflare and Branch as we join forces to celebrate the evolution of women in technology at our first annual International Women’s Day event!

From Ada Lovelace to Grace Hopper to Katherine Johnson to the incredible panel we’ll hear from at this event, women in technology have always pressed for progress.

The road isn’t always easy to navigate, but it’s more important than ever to remain steadfast and push forward to equity and parity regardless of gender.

At this lunchtime social, we’ll take a short trip through the last 50 years of women in technology, highlighting three legendary figures, and then dive into a panel discussion with three female founders. We’ll hear a little about each one’s journey in their respective industries and touch on their view of what it means to press for progress in today’s climate. We’ll open it up at the end for Q&A from the audience.

Lunch will be provided, and there will also be time for networking and connecting with other women in technology.


Had this 30-something paralegal not pressed for that better job and instead bottled my voice and refrained from sharing my “crazy” idea, no progress would have been made. Working in the tech industry, and specifically for Cloudflare, has allowed me to pursue my dreams, live out my ideas and pave the way for the women of tomorrow. I’m so excited to bring this event to fruition, and I can say on behalf of Cloudflare and Branch that we hope to see you there!

Categories: Technology

Validating Leaked Passwords with k-Anonymity

Wed, 21/02/2018 - 19:00

Today, v2 of Pwned Passwords was released as part of the Have I Been Pwned service offered by Troy Hunt. Containing over half a billion real world leaked passwords, this database provides a vital tool for correcting the course of how the industry combats modern threats against password security.

I have written about how we need to rethink password security, and about Pwned Passwords v2, in the following post: How Developers Got Password Security So Wrong. In this post, instead, I want to discuss one of the technical contributions Cloudflare has made towards protecting user information when using this tool.

Cloudflare continues to support Pwned Passwords by providing CDN and security functionality such that the data can easily be made available for download in raw form to organisations to protect their customers. Further, as part of the second iteration of this project, I have also worked with Troy on designing and implementing API endpoints that support anonymised range queries, which function as an additional, client-visible layer of security for those consuming the API.

This contribution allows for Pwned Passwords clients to use range queries to search for breached passwords, without having to disclose a complete unsalted password hash to the service.

Getting Password Security Right

Over time, the industry has realised that complex password composition rules (such as requiring a minimum number of special characters) have done little to improve the passwords users actually set; they have not prevented users from putting personal information in passwords, choosing common passwords, or reusing previously breached passwords[1]. Credential Stuffing has become a real threat recently: usernames and passwords are obtained from compromised websites and then injected into other websites until compromised user accounts are found.

This fundamentally works because users reuse passwords across different websites; when one set of credentials is breached on one site, this can be reused on other websites. Here are some examples of how credentials can be breached from insecure websites:

  • Websites which don't use rate limiting or challenge login requests can have a user's login credentials breached using brute-force attacks of common passwords for a given user,
  • database dumps from hacked websites can be taken offline and the password hashes cracked; modern GPUs make this very efficient for dictionary passwords (even with algorithms like Argon2, PBKDF2 and BCrypt),
  • many websites continue not to use any form of password hashing; once breached, passwords can be captured in raw form,
  • Man-in-the-Middle attacks or hijacking a web server can allow passwords to be captured before they're hashed.

This becomes a problem with password reuse: having obtained real-life username/password combinations, attackers can inject them into other websites (such as payment gateways, social networks, etc.) until access is obtained to more accounts (often of a higher value than the original compromised site).

Under recent NIST guidance, it is a requirement, when storing or updating passwords, to ensure they do not contain values which are commonly used, expected or compromised[2]. Research has found that 88.41% of users who received a fear appeal (a warning about the dangers of password reuse) later set unique passwords, whilst only 4.45% of users who did not receive a fear appeal set a unique password[3].

Unfortunately, there are a lot of leaked passwords out there; the downloadable raw data from Pwned Passwords currently contains over 30 GB in password hashes.

Anonymising Password Hashes

The key problem in checking passwords against the old Pwned Passwords API (and all similar services) lies in how passwords are checked: users are effectively required to submit unsalted hashes of passwords to identify whether the password is breached. The hashes must be unsalted, as salting them makes them computationally difficult to search quickly.

Currently, there are two choices available for validating whether a password has been leaked:

  • Submit the password (as an unsalted hash) to a third-party service, where the hash can potentially be stored for later cracking or analysis. For example, if you make an API call for a leaked password to a third-party API service using a WordPress plugin, the IP of the request can be used to identify the WordPress installation and then breach it when the password is cracked (such as from a later disclosure); or,
  • download the entire list of password hashes, uncompress the dataset and then run a search to see if your password hash is listed.

Needless to say, this conflict can seem like being placed between a security-conscious rock and an insecure hard place.

The Middle Way: The Private Set Intersection Problem

Academic computer scientists have considered the problem of how two (or more) parties can validate the intersection of data (from two or more unequal sets of data either side already has) without either sharing information about what they have. Whilst this work is exciting, these techniques are new: they haven't been subject to long-term review by the cryptography community, and the cryptographic primitives involved have not been implemented in any major libraries. Additionally (but critically), PSI implementations have substantially higher overhead than our k-anonymity approach (particularly for communication[4]). Even the current academic state of the art is not within acceptable performance bounds for an API service, with the communication overhead being equivalent to downloading the entire set of data.


Instead, our approach adds an additional layer of security by utilising a mathematical property known as k-Anonymity and applying it to password hashes in the form of range queries. As such, the Pwned Passwords API service never gains enough information about a non-breached password hash to be able to breach it later.

k-Anonymity is used in multiple fields to release anonymised but workable datasets; for example, so that hospitals can release patient information for medical research whilst withholding information that discloses personal information. Formally, a data set can be said to hold the property of k-anonymity, if for every record in a released table, there are k − 1 other records identical to it.

By using this property, we are able to separate hashes into anonymised "buckets". A client is able to anonymise the user-supplied hash and then download all leaked hashes in the same anonymised "bucket" as that hash, then do an offline check to see if the user-supplied hash is in that breached bucket.

In more concrete terms:

[Diagram: querying breached password hashes by truncated Hash Prefix]
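As a minimal shell sketch of that client-side step (illustrative only; the five-character prefix length matches the one used later in this post):

```shell
#!/bin/sh
# Client-side bucketing: hash the password with SHA-1, then truncate to a
# 5-character Hash Prefix. The prefix names the "bucket"; the server only
# ever learns the bucket, never the full hash.
bucket_for() {
  printf '%s' "$1" | sha1sum | cut -c1-5
}

bucket_for "test"   # → a94a8 (prefix of a94a8fe5ccb19ba61c4c0873d391e987982fbbd3)
```

Every password whose hash starts with the same five characters lands in the same bucket, which is what gives the query its ambiguity.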

In essence, we turn the tables on password derivation functions; instead of seeking to salt hashes to the point at which they are unique (against identical inputs), we instead introduce ambiguity into what the client is requesting.

Given hashes are essentially fixed-length hexadecimal values, we are able to simply truncate them, instead of having to resort to a decision tree structure to filter down the data. This does mean buckets are of unequal sizes but allows for clients to query in a single API request.

This approach can be implemented in a trivial way. Suppose a user enters the password test into a login form, and the service they’re logging into is programmed to validate whether their password is in a database of leaked password hashes. Firstly, the client generates a hash (in our example using SHA-1) of a94a8fe5ccb19ba61c4c0873d391e987982fbbd3. The client then truncates the hash to a predetermined number of characters (for example, 5), resulting in a Hash Prefix of a94a8. This Hash Prefix is then used to query the remote database for all hashes starting with that prefix (for example, by making an HTTP request to the range API endpoint for that prefix). The entire hash list is then downloaded, and each downloaded hash is compared to see if any match the locally generated hash. If so, the password is known to have been leaked.

As this can easily be implemented over HTTP, client-side caching can be used for performance purposes; the API is simple enough for developers to implement with little pain.

Below is a simple Bash implementation of how the Pwned Passwords API can be queried using range queries (Gist):

#!/bin/bash
echo -n "Password: "
read -rs password
echo
# Some openssl versions prefix their output with "(stdin)= ",
# so keep only the final field (the hex digest)
hash="$(echo -n "$password" | openssl sha1 | awk '{print $NF}')"
upperCase="$(echo "$hash" | tr '[a-z]' '[A-Z]')"
prefix="${upperCase:0:5}"
response=$(curl -s "https://api.pwnedpasswords.com/range/$prefix")
while read -r line; do
    lineOriginal="$prefix$line"
    if [ "${lineOriginal:0:40}" == "$upperCase" ]; then
        echo "Password breached."
        exit 1
    fi
done <<< "$response"
echo "Password not found in breached database."
exit 0

Implementation

Hashes (even in unsalted form) have two properties that are useful in anonymising data.

Firstly, the Avalanche Effect means that a small change in a hash results in a very different output; this means that you can't infer the contents of one hash from another hash. This is true even in truncated form.

For example, the Hash Prefix 21BD1 contains 475 seemingly unrelated passwords, including:

  • lauragpe
  • alexguo029
  • BDnd9102
  • melobie
  • quvekyny
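The avalanche effect is easy to observe from the shell; hashing two inputs that differ by a single character (here the illustrative strings test and text) produces digests with no visible relationship:

```shell
#!/bin/sh
# A one-character change to the input completely changes the SHA-1 digest,
# so nothing about one hash can be inferred from another.
a=$(printf '%s' "test" | sha1sum | cut -c1-40)
b=$(printf '%s' "text" | sha1sum | cut -c1-40)
echo "$a"   # a94a8fe5ccb19ba61c4c0873d391e987982fbbd3
echo "$b"
```

The two digests differ throughout, including in their first few characters, which is why truncated prefixes leak so little about the underlying password.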

Further, hashes are fairly uniformly distributed. If we count the original 320 million leaked passwords (in Troy's dataset) by the first hexadecimal character of the hash, the difference in size between the largest and the smallest bucket is ≈ 1%. The chart below shows hash count by first hexadecimal digit:

[Chart: hash count by first hexadecimal digit of the hash]

Algorithm 1 provides a simple check to discover how much we should truncate hashes by, to ensure every "bucket" has more than one hash in it. This requires every hash to be sorted by hexadecimal value. The algorithm, including an initial merge sort, runs in roughly O(n log n + n) time (worst-case):

[Algorithm 1: finding the Maximum Hash Prefix length]
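As a rough sketch of the check just described (an illustration inferred from the surrounding text, not the exact published algorithm): given a file of hex hashes sorted by value, find the longest prefix length at which every bucket still holds at least two hashes.

```shell
#!/bin/sh
# For a sorted file of hex hashes (one per line), report the longest prefix
# length k such that every k-character bucket contains >= 2 hashes, i.e.
# the data keeps k-anonymity (with k >= 2) after truncation.
max_prefix_len() {
  file=$1
  best=0
  for k in 1 2 3 4 5; do                 # candidate prefix lengths
    # count occurrences of each prefix; find the smallest bucket
    smallest=$(cut -c1-"$k" "$file" | uniq -c | sort -n | head -n1 | awk '{print $1}')
    if [ "$smallest" -ge 2 ]; then best=$k; fi
  done
  echo "$best"
}
```

Run against the full sorted hash dump, this is the kind of check that yields the Maximum Hash Prefix length of 5 reported below; `uniq -c` relies on the input being sorted, which truncation of sorted hashes preserves.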

After identifying the Maximum Hash Prefix length, it is fairly easy to separate the hashes into buckets, as described in Algorithm 3:

[Algorithm 3: separating hashes into buckets by Hash Prefix]

This implementation was originally evaluated on a dataset of over 320 million breached passwords, and we found that the Maximum Prefix Length to which all hashes can be truncated, whilst maintaining k-anonymity, is 5 characters. When hashes are grouped by a Hash Prefix of 5 characters, the median number of hashes associated to a Hash Prefix is 305. With response sizes for a query varying from 8.6KB to 16.8KB (a median of 12.2KB), the dataset is usable in many practical scenarios and is certainly a good response size for an API client.

On the new Pwned Passwords dataset (with over half a billion passwords), and whilst keeping the Hash Prefix length at 5, the average number of hashes returned per bucket is 478 - with the smallest buckets containing 381 hashes (E0812 and E613D) and the largest containing 584 (00000 and 4A4E8).

Splitting the hashes into buckets by a Hash Prefix of 5 means a maximum of 16^5 = 1,048,576 buckets would be utilised (for SHA-1), assuming that every possible Hash Prefix contains at least one hash. In the datasets we found this to be the case: the number of distinct Hash Prefix values was equal to the highest possible number of buckets. Whilst for secure hashing algorithms it is computationally infeasible to invert the hash function, it is worth noting that as a SHA-1 hash is 40 hexadecimal characters long and 5 of them are utilised by the Hash Prefix, the total number of possible hashes associated to a Hash Prefix is 16^(40-5) = 16^35 ≈ 1.39×10^42.
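The bucket arithmetic is quick to sanity-check in the shell:

```shell
#!/bin/sh
# 16 possible values per hex character, 5 characters of Hash Prefix:
buckets=$((16 * 16 * 16 * 16 * 16))
echo "$buckets"      # 1048576
# ...and 40 - 5 = 35 hex characters remain undisclosed within each bucket.
echo $((40 - 5))     # 35
```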

Important Caveats

It is important to note that where a user's password is already breached, an API call for a specific range of breached passwords can reduce the search candidates used in a brute-force attack. Whilst users with existing breached passwords are already vulnerable to such attacks, searching for a specific range narrows the candidate set - although the API service has no way of determining whether the client was searching for a password that was breached. Using a deterministic algorithm to also run queries for other Hash Prefixes can help reduce this risk.

One reason this is important is that this implementation does not currently guarantee l-diversity, meaning a bucket may contain a hash which is of substantially higher use than others. In the future we hope to use percentile-based usage information from the original breached data to better guarantee this property.

For general users, Pwned Passwords is usually exposed via a web interface, which uses a JavaScript client to run this process; if the origin web server were hijacked to change the JavaScript being returned, this computation could be removed (and the password could be sent to the hijacked origin server). Whilst JavaScript requests are somewhat transparent to a developer inspecting them, this cannot be depended on, and for technical users, non-web-based clients are preferable.

The original use case for this service was to be deployed privately in a Cloudflare data centre, where our services can use it to enhance user security, with range queries complementing the existing transport security. Depending on your risks, it's safer to deploy this service yourself (in your own data centre) and use the k-anonymity approach to validate passwords where services do not themselves have the resources to store an entire database of leaked password hashes.

I would strongly recommend against storing the range queries used by users of your service; but if you do, for whatever reason, store them only as aggregate analytics such that they cannot be tied back to any given user's password.

Final Thoughts

Going forward, as we test this technology more, Cloudflare is looking into how we can use a private deployment of this service to better offer security functionality, both for log-in requests to our dashboard and for customers who want to protect against Credential Stuffing on their own websites using our edge network. We also seek to consider how we can incorporate recent work on the Private Set Intersection Problem, alongside considering l-diversity, for additional security guarantees. As always, we'll keep you updated right here on our blog.

  1. Campbell, J., Ma, W. and Kleeman, D., 2011. Impact of restrictive composition policy on user password choices. Behaviour & Information Technology, 30(3), pp.379-388. ↩︎

  2. Grassi, P. A., Fenton, J. L., Newton, E. M., Perlner, R. A., Regenscheid, A. R., Burr, W. E., Richer, J. P., Lefkovitz, N. B., Danker, J. M., Choong, Y.-Y., Greene, K. K., and Theofanos, M. F. (2017). NIST Special Publication 800-63B Digital Identity Guidelines, chapter Authentication and Lifecycle Management. National Institute of Standards and Technology, U.S. Department of Commerce. ↩︎

  3. Jenkins, Jeffrey L., Mark Grimes, Jeffrey Gainer Proudfoot, and Paul Benjamin Lowry. "Improving password cybersecurity through inexpensive and minimally invasive means: Detecting and deterring password reuse through keystroke-dynamics monitoring and just-in-time fear appeals." Information Technology for Development 20, no. 2 (2014): 196-213. ↩︎

  4. De Cristofaro, E., Gasti, P. and Tsudik, G., 2012, December. Fast and private computation of cardinality of set intersection and union. In International Conference on Cryptology and Network Security (pp. 218-231). Springer, Berlin, Heidelberg. ↩︎

Categories: Technology

How Developers got Password Security so Wrong

Wed, 21/02/2018 - 19:00

Both in our real lives, and online, there are times where we need to authenticate ourselves - where we need to confirm we are who we say we are. This can be done using three things:

  • Something you know
  • Something you have
  • Something you are

Passwords are an example of something you know; they were introduced in 1961 for computer authentication on a time-sharing computer at MIT. Shortly afterwards, a PhD researcher breached this system (by simply downloading a list of unencrypted passwords) and used the time allocated to others on the computer.

As time has gone on, developers have continued to store passwords insecurely, and users have continued to set them weakly. Despite this, no viable alternative has been created for password security. To date, no system has been created that retains all the benefits that passwords offer, as researchers have rarely considered real-world constraints[1]. For example, when using fingerprints for authentication, engineers often forget the sizable percentage of the population that does not have usable fingerprints, or the hardware upgrade costs involved.

Cracking Passwords

In the 1970s, people started thinking about how to better store passwords and cryptographic hashing started to emerge.

Cryptographic hashes work like trapdoors; whilst it's easy to hash a password, it's far harder (computationally infeasible, for an ideal hashing algorithm) to turn that "hash" back into the original input. They are used in a lot of things, from speeding up searching in files to the One-Time Password generators used by banks.

Passwords should ideally be stored using specialised hashing functions like Argon2, BCrypt or PBKDF2, which are designed to prevent Rainbow Table attacks.

If you were to hash the password p4$$w0rd using the SHA-1 hashing algorithm, the output would be 6c067b3288c1b5c791afa04e12fb013ed2e84d10. This output is the same every time the algorithm is run. As a result, attackers are able to create Rainbow Tables which contain the hashes of common passwords; this information is then used to break password hashes (where the password and hash are listed in a Rainbow Table).
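That determinism is the whole attack surface: hashing the same input twice always produces the same digest, so a table computed once works against every unsalted database. A small demonstration, using sha1sum in place of a library call:

```shell
#!/bin/sh
# Unsalted hashing is deterministic: identical inputs give identical
# digests, which is what makes precomputed Rainbow Tables possible.
h1=$(printf '%s' 'p4$$w0rd' | sha1sum | cut -c1-40)
h2=$(printf '%s' 'p4$$w0rd' | sha1sum | cut -c1-40)
[ "$h1" = "$h2" ] && echo "same password, same digest: $h1"
```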

Algorithms like BCrypt essentially salt passwords with a random string before hashing them. This random string is stored alongside the password hash and helps make the password harder to crack by making the output unique. The hashing process is repeated many times (defined by a difficulty variable), each time adding the random salt onto the output of the hash and rerunning the hash computation.

For example, the BCrypt hash $2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy starts with $2a$10$, which indicates the algorithm used is BCrypt (with a cost factor of 10); it contains a random salt of N9qo8uLOickgx2ZMRZoMye and a resulting hash of IjZAgcfl7p92ldGxad68LJZdL17lhWy. Storing the salt allows the password hash to be regenerated identically when the input is known.
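That layout can be pulled apart with ordinary shell string handling (a sketch of the parsing only; the field widths follow the description above: a 22-character salt and a 31-character hash after the header):

```shell
#!/bin/sh
# Split a BCrypt string: $<version>$<cost>$<22-char salt><31-char hash>
bcrypt='$2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy'
version=$(printf '%s' "$bcrypt" | cut -d'$' -f2)   # 2a
cost=$(printf '%s' "$bcrypt" | cut -d'$' -f3)      # 10 (i.e. 2^10 rounds)
rest=$(printf '%s' "$bcrypt" | cut -d'$' -f4)      # salt and hash, concatenated
salt=$(printf '%.22s' "$rest")                     # N9qo8uLOickgx2ZMRZoMye
hash=${rest#"$salt"}                               # IjZAgcfl7p92ldGxad68LJZdL17lhWy
echo "cost=$cost salt=$salt"
```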

Unfortunately, salting is no longer enough; passwords can be cracked faster and faster using modern GPUs (which are specialised at doing the same task over and over). When a site suffers a security breach, users' password hashes can be taken offline in database dumps and cracked at leisure.

Additionally, websites that fail to rate-limit login requests or use captchas can be subjected to brute-force attacks: for a given user, an attacker will repeatedly try different (but common) passwords until they gain access to that user's account.

Sometimes sites will lock users out after a handful of failed login attempts; attacks can instead be targeted to move on quickly to a new account after the most common set of passwords has been attempted. Lists like the following (in some cases with many, many more passwords) can be attempted to breach an account:

[Image: a list of commonly tried passwords]

The industry has tried to combat this problem with password composition rules: requiring users to comply with complex rules before setting passwords (such as a minimum number of digits or punctuation symbols). Research has shown that this hasn't helped combat the problem of password reuse, weak passwords, or users putting personal information in passwords.

Credential Stuffing

Whilst it may seem that this is only a problem for websites that store passwords weakly, Credential Stuffing makes it even worse.

It is common for users to reuse passwords from site to site, meaning a username and password from a compromised website can be used to breach far more important accounts - like online banking gateways or government logins. When a password is reused, it takes just one website being breached to gain access to every other site that user has credentials for.

How Developers got Password Security so Wrong

This Is Not Fine - The Nib

Fixing Passwords

There are fundamentally three things that need to be done to fix this problem:

  • Good UX to improve user decisions
  • Improving developer education
  • Eliminating reuse of breached passwords

How Can I Secure Myself (or my Users)?

Before discussing the things we're doing, I wanted to briefly cover what you can do to help protect yourself now. For most users, there are three steps you can immediately take.

Use a Password Manager (like 1Password or LastPass) to set random, unique passwords for every site. Additionally, look to enable Two-Factor Authentication where possible; this uses something you have, in addition to the password you know, to validate you. It means that, alongside your password, you have to enter a short-lived code from a device like your phone before being able to log in to any site.
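Those short-lived codes are typically time-based one-time passwords (TOTP, RFC 6238). As a sketch of what the app on your phone computes (the secret below is the RFC test key, not anything real):

```python
# Sketch of TOTP (RFC 6238): HMAC the current 30-second time step
# with a shared secret, then "dynamically truncate" (RFC 4226) the
# result to a 6-digit code.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: float, step: int = 30, digits: int = 6) -> str:
    counter = int(timestamp) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Your phone and the site compute this independently from the shared
# secret; the code changes every 30 seconds.
print(totp(b"12345678901234567890", time.time()))
```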

Two-Factor Authentication is supported on many of the world's most popular social media, banking and shopping sites, and guides to enabling it on popular websites are easy to find. If you are a developer, you should make an effort to support Two-Factor Authentication too.

Set a secure memorable password for your password manager; and yes, turn on Two-Factor Authentication for it (and keep your backup codes safe). You can find additional security tips (including tips on how to create a secure master password) in my blog post: Simple Cyber Security Tips.

Developers should look to abolish bad practice composition rules (and simplify them as much as possible). Password expiration policies do more harm than good, so seek to do away with them. For further information refer to the blog post by the UK's National Cyber Security Centre: The problems with forcing regular password expiry.

Finally, Troy Hunt has an excellent blog post on passwords for users and developers alike: Passwords Evolved: Authentication Guidance for the Modern Era.

Improving Developer Education

Developers should seek to build a culture of security in the organisations where they work: talk about security, about the benefits of challenging malicious login requests, and about password hashing in simple terms.

If you're working on an open-source project that handles authentication, expose easy password hashing APIs - for example the password_hash, password_needs_rehash & password_verify functions in modern PHP versions.
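Those PHP functions are a good model for any language: one call to hash, one to verify, one to check whether stored hashes need upgrading. As an illustrative sketch of the same three-call shape, here is a Python analogue built on the standard library's scrypt (the storage format and work factor here are invented for the example; PHP's password_hash defaults to bcrypt):

```python
# Sketch of a password_hash / password_verify / password_needs_rehash
# API shape. Storage format "N$salthex$digesthex" is made up for the
# example; scrypt stands in for bcrypt since it ships in the stdlib.
import hashlib
import hmac
import os

N = 2 ** 14  # work factor; raise it as hardware gets faster

def password_hash(password: str) -> str:
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=N, r=8, p=1)
    return f"{N}${salt.hex()}${digest.hex()}"

def password_verify(password: str, stored: str) -> bool:
    n, salt, digest = stored.split("$")
    candidate = hashlib.scrypt(password.encode(), salt=bytes.fromhex(salt),
                               n=int(n), r=8, p=1)
    return hmac.compare_digest(candidate, bytes.fromhex(digest))  # constant time

def password_needs_rehash(stored: str) -> bool:
    return int(stored.split("$")[0]) < N  # work factor fell behind current policy

stored = password_hash("correct horse battery staple")
assert password_verify("correct horse battery staple", stored)
assert not password_verify("hunter2", stored)
```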

Eliminating Password Reuse

We know that complex password composition rules are largely ineffective, and recent guidance has followed suit. A better alternative to composition rules is to block users from signing up with passwords which are known to have been breached. Under recent NIST guidance, it is a requirement, when storing or updating passwords, to ensure they do not contain values which are commonly used, expected or compromised[2].

This is easier said than done: the recent version of Troy Hunt's Pwned Passwords database contains over half a billion passwords (over 30 GB uncompressed). Whilst developers can use API services to check if a password is reused, this requires sending either the raw password or an unsalted hash of it. This can be especially problematic when multiple services handle authentication in a business, and each would otherwise have to store the large corpus of passwords itself.

This is a problem I've started looking into recently; as part of our contribution to Troy Hunt's Pwned Passwords database, I have designed a range search API that allows developers to check if a password is reused without needing to share the password (even in hashed form) - instead only needing to send a short segment of the cryptographic hash used. You can find more information on this contribution in the post: Validating Leaked Passwords with k-Anonymity.
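The client-side matching logic for such a range search is small; here is a sketch, with the network call replaced by a canned response so the example stays self-contained (the SUFFIX:COUNT response format follows the Pwned Passwords v2 API):

```python
# Sketch of the range-search idea: hash the password locally, send only
# the first five hex characters of the SHA-1 to the server, and match
# the returned suffixes offline. The full hash never leaves the client.
import hashlib

def hash_prefix_and_suffix(password: str) -> tuple[str, str]:
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]          # only the prefix is ever sent

def times_breached(suffix: str, api_response: str) -> int:
    # api_response is the body of a range query: one "SUFFIX:COUNT" per line.
    for line in api_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

prefix, suffix = hash_prefix_and_suffix("P@ssw0rd")
# In real use you would fetch the range for `prefix` over HTTPS here; a
# canned response keeps the example offline.
fake_response = f"{suffix}:52579\n0018A45C4D1DEF81644B54AB7F969B88D65:10"
print(times_breached(suffix, fake_response))  # 52579
```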

Version 2 of Pwned Passwords is now available - you can find more information on how it works on Troy Hunt's blog post "I've Just Launched Pwned Passwords, Version 2".

  1. Bonneau, J., Herley, C., Van Oorschot, P.C. and Stajano, F., 2012, May. The quest to replace passwords: A framework for comparative evaluation of web authentication schemes. In Security and Privacy (SP), 2012 IEEE Symposium on (pp. 553-567). IEEE. ↩︎

  2. Grassi, P. A., Fenton, J. L., Newton, E. M., Perlner, R. A., Regenscheid, A. R., Burr, W. E., Richer, J. P., Lefkovitz, N. B., Danker, J. M., Choong, Y.-Y., Greene, K. K., and Theofanos, M. F. (2017). NIST Special Publication 800-63B Digital Identity Guidelines, chapter Authentication and Lifecycle Management. National Institute of Standards and Technology, U.S. Department of Commerce. ↩︎

Categories: Technology

ជំរាបសួរ! - Phnom Penh: Cloudflare’s 122nd Data Center

Wed, 21/02/2018 - 05:04
 Cloudflare’s 122nd Data Center

Cloudflare is excited to turn up our newest data center in Phnom Penh, Cambodia, making over 7 million Internet properties even faster. This is our 122nd data center globally, and our 41st data center in Asia. By the end of 2018, we expect that 95% of the world's population will live in a country with a Cloudflare data center, as we grow our global network to span 200 cities.

Cambodian Internet

Home to over 16 million people, Cambodia has a relatively low base of Internet penetration (~25%) today, but is seeing an increasing number of Internet users coming online. For perspective, Cambodia has approximately the same number of Internet users as Lebanon (where we just turned up our 121st data center!) or Singapore (from where we used to serve a portion of Cambodian visitors).

In the coming weeks, we’ll further optimize our routing for Cloudflare customers and expect to see a growing number of ISPs pick up our customers’ traffic on a low latency path.

 Cloudflare’s 122nd Data Center
Latency from a Cambodian ISP (SINET) to Cloudflare customers decreases 10x

Coming up next

Next up, in fact, thousands of feet further up, we head to the mountains for Cloudflare’s 123rd data center. Following that, two upcoming Cloudflare data centers are located well south of the Equator, and a continent away.

Categories: Technology

Using Go as a scripting language in Linux

Tue, 20/02/2018 - 19:49
Using Go as a scripting language in Linux

At Cloudflare we like Go. We use it in many in-house software projects as well as parts of bigger pipeline systems. But can we take Go to the next level and use it as a scripting language for our favourite operating system, Linux?
Using Go as a scripting language in Linux
gopher image CC BY 3.0 Renee French
Tux image CC0 BY OpenClipart-Vectors

Why consider Go as a scripting language

Short answer: why not? Go is relatively easy to learn, not too verbose and there is a huge ecosystem of libraries which can be reused to avoid writing all the code from scratch. Some other potential advantages it might bring:

  • Go-based build system for your Go project: go build command is mostly suitable for small, self-contained projects. More complex projects usually adopt some build system/set of scripts. Why not have these scripts written in Go then as well?
  • Easy non-privileged package management out of the box: if you want to use a third-party library in your script, you can simply go get it. And because the code will be installed in your GOPATH, getting a third-party library does not require administrative privileges on the system (unlike some other scripting languages). This is especially useful in large corporate environments.
  • Quick code prototyping on early project stages: when you're writing the first iteration of the code, it usually takes a lot of edits even to make it compile and you have to waste a lot of keystrokes on "edit->build->check" cycle. Instead you can skip the "build" part and just immediately execute your source file.
  • Strongly-typed scripting language: if you make a small typo somewhere in the middle of a script, most scripting languages will execute everything up to that point and fail on the typo itself. This might leave your system in an inconsistent state. With strongly-typed languages many typos can be caught at compile time, so the buggy script will not run in the first place.

Current state of Go scripting

At first glance Go scripts seem easy to implement with Unix support of shebang lines for scripts. A shebang line is the first line of the script, which starts with #! and specifies the script interpreter to be used to execute the script (for example, #!/bin/bash or #!/usr/bin/env python), so the system knows exactly how to execute the script regardless of the programming language used. And Go already supports interpreter-like invocation for .go files with go run command, so it should be just a matter of adding a proper shebang line, something like #!/usr/bin/env go run, to any .go file, setting the executable bit and we're good to go.

However, there are problems around using go run directly. This great post describes in detail all the issues around go run and potential workarounds, but the gist is:

  • go run does not properly return the script error code back to the operating system and this is important for scripts, because error codes are one of the most common ways multiple scripts interact with each other and the operating system environment.
  • you can't have a shebang line in a valid .go file, because Go does not know how to process lines starting with #. Other scripting languages do not have this problem, because for most of them # is a way to specify comments, so the final interpreter just ignores the shebang line, but Go comments start with // and go run on invocation will just produce an error like:
package main: helloscript.go:1:1: illegal character U+0023 '#'

The post describes several workarounds for the above issues, including using a custom wrapper program, gorun, as an interpreter, but none of them provides an ideal solution. You either:

  • have to use a non-standard shebang line, which starts with //. This is technically not even a shebang line, but rather relies on how the bash shell processes executable text files, so this solution is bash-specific. Also, because of the specific behaviour of go run, this line is rather complex and not obvious (see the original post for examples).
  • have to use a custom wrapper program gorun in the shebang line, which works well; however, you end up with .go files which are not compilable with the standard go build command because of the illegal # character.

How Linux executes files

OK, it seems the shebang approach does not provide us with an all-round solution. Is there anything else we could use? Let's take a closer look at how the Linux kernel executes binaries in the first place. When you try to execute a binary/script (or any file, for that matter, which has the executable bit set), your shell will in the end just use the Linux execve system call, passing it the filesystem path of the binary in question, the command line parameters and the currently defined environment variables. The kernel is then responsible for correct parsing of the file and creating a new process with the code from the file. Most of us know that Linux (and many other Unix-like operating systems) use the ELF binary format for executables.

However, one of the core principles of Linux kernel development is to avoid "vendor/format lock-in" for any subsystem which is part of the kernel. Therefore, Linux implements a "pluggable" system, which allows any binary format to be supported by the kernel - all you have to do is write a correct module which can parse the format of your choosing. And if you take a closer look at the kernel source code, you'll see that Linux supports several binary formats out of the box. For example, the recent 4.14 Linux kernel supports at least 7 binary formats (in-tree modules for various binary formats usually have a binfmt_ prefix in their names). It is worth noting the binfmt_script module, which is responsible for parsing the above-mentioned shebang lines and executing scripts on the target system (not everyone knows that shebang support is actually implemented in the kernel itself and not in the shell or another daemon/process).

Extending supported binary formats from userspace

But since we concluded that shebang is not the best option for our Go scripting, it seems we need something else. Surprisingly, the Linux kernel already has a "something else" binary support module, with the appropriate name binfmt_misc. The module allows an administrator to dynamically add support for various executable formats directly from userspace through a well-defined procfs interface, and is well documented.

Let's follow the documentation and try to set up a binary format description for .go files. First of all, the guide tells you to mount the special binfmt_misc filesystem to /proc/sys/fs/binfmt_misc. If you're using a relatively recent systemd-based Linux distribution, it is highly likely the filesystem is already mounted for you, because systemd by default installs special mount and automount units for this purpose. To double-check, just run:

$ mount | grep binfmt_misc
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=27,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)

Another way is to check if there are any files in /proc/sys/fs/binfmt_misc: a properly mounted binfmt_misc filesystem will create at least two special files, named register and status, in that directory.

Next, since we do want our .go scripts to be able to properly pass the exit code to the operating system, we need the custom gorun wrapper as our "interpreter":

$ go get
$ sudo mv ~/go/bin/gorun /usr/local/bin/

Technically we don't need to move gorun to /usr/local/bin or any other system path, as binfmt_misc requires the full path to the interpreter anyway, but the system may run this executable with arbitrary privileges, so it is a good idea to limit access to the file from a security perspective.

At this point let's create a simple toy Go script helloscript.go and verify we can successfully "interpret" it. The script:

package main

import (
    "fmt"
    "os"
)

func main() {
    s := "world"
    if len(os.Args) > 1 {
        s = os.Args[1]
    }
    fmt.Printf("Hello, %v!", s)
    fmt.Println("")
    if s == "fail" {
        os.Exit(30)
    }
}

Checking if parameter passing and error handling works as intended:

$ gorun helloscript.go
Hello, world!
$ echo $?
0
$ gorun helloscript.go gopher
Hello, gopher!
$ echo $?
0
$ gorun helloscript.go fail
Hello, fail!
$ echo $?
30

Now we need to tell the binfmt_misc module how to execute our .go files with gorun. Following the documentation we need this configuration string: :golang:E::go::/usr/local/bin/gorun:OC, which basically tells the system: "if you encounter an executable file with a .go extension, please execute it with the /usr/local/bin/gorun interpreter". The OC flags at the end of the string make sure that the script will be executed according to the owner information and permission bits set on the script itself, and not the ones set on the interpreter binary. This makes Go script execution behave the same as the rest of the executables and scripts in Linux.
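The register string is positional and colon-delimited, following the kernel's :name:type:offset:magic:mask:interpreter:flags layout; splitting our entry makes the fields visible:

```python
# The binfmt_misc register string is colon-delimited:
#   :name:type:offset:magic:mask:interpreter:flags
# For extension matching (type E), the "magic" field holds the extension
# and "offset"/"mask" are left empty.
entry = ":golang:E::go::/usr/local/bin/gorun:OC"
_, name, kind, offset, magic, mask, interpreter, flags = entry.split(":")

print(name)         # golang - name of the new format
print(kind)         # E      - match on file Extension (M would match magic bytes)
print(magic)        # go     - the extension to match
print(interpreter)  # /usr/local/bin/gorun
print(flags)        # OC     - use the script's own owner/credentials, not the interpreter's
```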

Let's register our new Go script binary format:

$ echo ':golang:E::go::/usr/local/bin/gorun:OC' | sudo tee /proc/sys/fs/binfmt_misc/register
:golang:E::go::/usr/local/bin/gorun:OC

If the system successfully registered the format, a new file golang should appear under /proc/sys/fs/binfmt_misc directory. Finally, we can natively execute our .go files:

$ chmod u+x helloscript.go
$ ./helloscript.go
Hello, world!
$ ./helloscript.go gopher
Hello, gopher!
$ ./helloscript.go fail
Hello, fail!
$ echo $?
30

That's it! Now we can edit helloscript.go to our liking and see the changes take effect immediately the next time the file is executed. Moreover, unlike the previous shebang approach, we can compile this file at any time into a real executable with go build.

Whether you like Go or digging in Linux internals, we have positions for either of these, and even both at once. Check out our careers page.

Categories: Technology

Keeping our users safe

Fri, 16/02/2018 - 22:30
Keeping our users safe

To everyone at Cloudflare, account security is one of our most important tasks. We recognize that to every customer on our platform, we are critical infrastructure. We also know that the simplest attacks often lead to the most devastating outcomes. Most people think that if they are going to get hacked it will be by some clever “zero day”. The reality couldn’t be further from the truth.

Attackers are smart and they have realized that even in 2018, the human is still the weakest link in the chain. The 2017 Verizon breach report identified that 81% of hacking-related breaches occurred as a result of weak credentials or credential theft, an increase from the 63% reported in 2016’s breach report.

Keeping our users safe

Source: Verizon 2017 data breach report

Your credentials are as important as your house or car keys. If someone copies or steals them, the repercussions can be catastrophic. If you suspect someone has access to your house keys you change your locks. If you aren’t fast enough, someone might break in.

Likewise if you realize that someone might have access to your password, the remedy is to change it. Too often, as with house keys, we are slow to change our passwords. This is why we see so many account compromises happen after a big public breach like Yahoo. As soon as the attacker gets their hands on a cache of credentials from a breach they immediately try them against every other account the user has. They know that a very high percentage of users still reuse the same credentials across multiple sites. So their chances of success are high - sometimes years after a breach has happened.

This is especially problematic with API keys because they often leak without anyone ever knowing it’s happened. Common examples include attackers who steal API keys by reversing client software, or using malicious code inside the browser to navigate and steal secrets - often long after the user has logged out. Some banking trojans, such as the infamous Zeus, even continue to browse after you have logged out, displaying fake balance information to hide the withdrawals they make. This is why many banks now force you to re-authenticate often with 2FA when you log in and again when you perform any kind of financial transaction.

Keeping our users safe

The Trojan Horse. (It’s not a new attack)

Attackers realize this too and they have been steadily upping their game to catch unsuspecting users. Not content to just sit and wait, attackers constantly try to grab credentials through a combination of trickery and sophisticated software attacks. The one thing that links all of this? It’s you that they are attacking, not some server sitting in a datacenter.


Phishing emails remain the most popular method but they have now evolved and range from the mundane badly spelled email asking for your password or offering you a link to some unknown site...

Keeping our users safe

Common Cloudflare phishing email

… to more sophisticated, professionally articulated fakes that deliver complex payloads which both harvest your sensitive data and compromise your system in a single devastating click. Cloudflare, like most security-aware companies, will never send you an email that takes you to another site, or which asks you for your password or API key. If you see one, please report it to us through support or our abuse process. That way we can work with hosting companies, ISPs and infrastructure providers to dismantle the systems behind it and neutralize the campaign.

Keeping our users safe

A more sophisticated Cloudflare phishing email

Malicious software

Stepping up from phishing emails, we now have to face malicious software hidden inside innocuous everyday packages from browser extensions to free games. The most popular and difficult to detect of these by far are the malicious browser extensions. Often these start out as legitimate browser tools, then either they get hacked, or an attacker simply buys a forgotten extension project and injects malicious code into it. Now, anything your browser can see, the malicious extension can also see.

The most sophisticated of these are often targeted at a particular institution and know how to silently navigate to the credentials they wish to steal. Over the past couple of years we have seen several of these targeted specifically at Cloudflare customers. Below is an example of one such campaign from late last year involving a very popular Chrome extension called “Web Developer”.

Keeping our users safe

Web Developer for Chrome breach alert

In August 2017, the “Web Developer” for Chrome Extension was hacked. The attacker(s) compromised the developer’s account and modified version 0.49 to include a malicious payload that specifically targeted Cloudflare users.

The malicious code injected into the extension was designed to be as stealthy and as resilient to attempts to kill it as possible.

  • First it checks to make sure it’s been installed for at least 10 minutes.
  • If it determines the coast is clear, it connects to a machine-generated domain name, also called a DGA or “Domain Generation Algorithm” domain. These are random-seeming domain names that are actually produced by an algorithm, so that they are harder to find and the attacker can automatically move to a new domain if the old one gets shut down, without changing any code. On August 2nd 2017 the DGA domain for this malicious extension was “wd7bdb20e4d622f6569f3e8503138c859d[.]win”. By August 3rd it had changed to “wd8a2b7d68f1c7c7f34381dc1a198465b4[.]win”. As you can imagine, this makes predicting new domains very hard unless you break the code behind the DGA algorithm.
  • If the connection is successful, it downloads a fresh payload, ga.js, over HTTPS, which is meant to fool anyone who sees it into thinking it is downloading Google Analytics.
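A toy DGA (invented for illustration; this is not the extension's actual algorithm) shows why defenders end up chasing a moving target - the malware and its operator derive "today's" domain independently from the date:

```python
# Toy DGA: derive a deterministic domain from the date and a shared
# seed. Attacker and malware compute the same name without ever
# communicating it; defenders see a new domain every day.
# (Scheme invented for illustration, shaped like the wd<hex>.win
# domains seen in the campaign.)
import hashlib

def dga_domain(date: str, seed: bytes = b"campaign-seed") -> str:
    digest = hashlib.md5(seed + date.encode()).hexdigest()
    return f"wd{digest}.win"

print(dga_domain("2017-08-02"))
print(dga_domain("2017-08-03"))  # different day, different domain
```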

Keeping our users safe

Example code downloaded by the malicious version of Web Developer

The code it downloads is heavily obfuscated, to make it even harder to detect what is going on. However, if you decode it, what you find is that this code is designed to pull down yet more malicious payloads, such as this one to navigate the Cloudflare dashboard site and retrieve a user’s API key. It literally waits for a user to log into Cloudflare and then steals the user’s API key by accessing sensitive pages behind their back. Some variants even do this after the user has logged out: they show the user a fake logged-out page and then carry on silently pillaging the account for all its secrets.

Keeping our users safe

The Cloudflare API Key stealing payload downloaded by the malicious extension.

Web Developer for Chrome wasn’t the only extension compromised in that particular campaign. Other extensions compromised included:

  • Chrometana – Version 1.1.3
  • Infinity New Tab – Version 3.12.3
  • CopyFish – Version 2.8.5
  • Web Paint – Version 1.2.1
  • Social Fixer – Version 20.1.1
  • TouchVPN
  • Betternet VPN

All these extensions have since been updated, but anyone with an older version should consider themselves at risk. How did this happen? Phishing of course:

Keeping our users safe

Phishing email sent to Chrome developer

If you want to read more about this, our friends over at Proofpoint have an in-depth teardown of all the other aspects of this particular campaign.

How we protect credentials
What we do today - how we store them.
System security

We have built our systems to be secure by design. In some cases, this is as simple as ensuring that sensitive data is restricted, or made completely unobtainable. In other cases this has meant building systems in a way that makes them secure against a wide range of common attacks.

Passwords

We store user passwords in a secure database using a complex, salted hash based on the Blowfish-derived bcrypt() hashing algorithm.

API Keys

We ensure that API keys are unique by generating them using a combination of AES (Rijndael 256), SHA256 and plenty of entropy. Once generated through these algorithms, API keys are also stored in a secure database that only a handful of people have access to.
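Cloudflare's exact construction isn't public, but the general pattern - fresh CSPRNG entropy bound to per-account data and run through SHA-256 to get a fixed-length, unpredictable token - can be sketched as follows (function and parameter names are illustrative):

```python
# Sketch of the general pattern only, not Cloudflare's actual scheme:
# draw fresh entropy from the OS CSPRNG, mix in a per-account value,
# and hash to a fixed-length token.
import hashlib
import secrets

def generate_api_key(account_id: str) -> str:
    entropy = secrets.token_bytes(32)            # 256 bits from the CSPRNG
    material = entropy + account_id.encode()     # bind the key to the account
    return hashlib.sha256(material).hexdigest()  # fixed-length, unpredictable

key = generate_api_key("acct_1234")
print(len(key))  # 64
```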

What we do today - how we handle credentials.
At the back-end

All calls to these sensitive databases are audited, and stored in logs that go back to the very beginning of Cloudflare. In one recent audit exercise we were able to review, and determine the exact time an API Key was generated for a customer in 2013. Access to both the audit logs and critical systems, like our databases, is restricted to a handful of senior production engineers and security staff. All staff access is also logged, and stored securely for audit purposes.

Finally, all programmatic calls to these databases are made through stored procedures linked to dedicated accounts. Dynamic SQL is not permitted.

In Transit

All connections are made over HTTPS and sensitive tokens or credentials are never exposed in transit.

In the User Interface

To access your dashboard you need your account email, your password, and, assuming you have enabled it (you really should if you haven’t), your 2FA code. If all of these are correct, and the IP you are using is one you are known to use, you are logged in. If the IP address isn’t known, we send an alert to your registered email address notifying you of the event.

Keeping our users safe

Screenshot of an IP alert generated by accessing the dashboard

If your account is Pro, Business or Enterprise, that email also contains a “Multi Factor Authentication” or MFA code. Until this code is typed in, attempts to log in are blocked.

Keeping our users safe

MFA Alert generated by accessing the dashboard

For maximum benefit, make sure the email registered to your Cloudflare account is correct and that it is one you check frequently. The faster you react to an alert, the better!

API Keys

API Keys are very sensitive things. They have as much power as your password but benefit from relatively few of the protections built into a modern browser. This is why we are deepening our investment to make our API even more secure.

Our first improvement, released last week, was to protect the API key against malicious software - such as a malicious browser extension - by adding a CAPTCHA to the “View API Key” feature. Below is a screenshot of the challenge you will now see when attempting to access your API Key. This change means that even if malicious software manages to steal your password, it cannot easily request and harvest your API Key.

Keeping our users safe

Updated “View API Key” user experience

Next, we are looking at scoped API keys: keys which can either be restricted to authorized IP addresses or limited in terms of what they can do. At the same time we will be adding the ability to turn the API off completely for your account. Finally, looking to the future, we are exploring options like other technical frameworks and token types, so that you have even better tools to build an API architecture that is secure by design.

How you can help keep your account safe
  • Turn on 2FA and ensure your email address is correct The sooner you see an alert, the faster you can take action to lock your account down.
  • Handle your credentials with care: NEVER enter them into any site other than the official Cloudflare dashboard. If in doubt, check the website certificate fingerprints:

SHA 256 - 12 C4 A5 74 7E D5 6E 37 2C 87 89 02 25 E4 CD 51 89 6D 8E AD 7D 55 CF 76 BF D1 9B 6B 74 6C 70 D0
SHA1 - D4 AD AB 1B 95 72 8D 3D 6E 26 4A 70 70 B1 1E 88 2F CA 71 67
  • Check your browser extensions regularly. If you see any you don’t recognize remove them immediately. Remember that like all software, regular updates to browser extensions are important too.
  • Design your client or application to protect your API credentials. If your API design is weak - for example, it doesn’t do proper certificate validation, or, even worse, transmits data in plaintext at any point in the connection - then you are risking disaster.
  • Change your API key regularly, especially if you have any concern that it may have been exposed.
  • Do not store your API key (or any credentials for that matter) in public repositories. One way to ensure this is by making sure you don’t store your API key in your application source tree. A significant number of accidental leaks to GitHub happen because this gets overlooked.
  • Be very careful when embedding keys or credentials in clients that you will expose publicly. Do not expose your account API key this way. Reverse engineering is not hard and many organizations have learnt this the hard way.
  • Review your code before you release it. Use a tool that’s part of your CI or Build processes to automatically check for accidental key leakage. How many times have we seen API Keys for major institutions leak because they were accidentally checked into GitHub? Don’t be that person.
  • IBM has some great additional security guidance for organizations building with APIs.
  • Finally, it may seem obvious but take care when clicking on links in emails. Even experienced Chrome developers get tricked sometimes!
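Two small helpers in the spirit of the list above (the CF_API_KEY variable name and the heuristic are examples for illustration, not a Cloudflare convention): read secrets from the environment instead of the source tree, and run a crude leak check in CI before release.

```python
# Sketch: keep keys out of the source tree by reading them from the
# environment at runtime, and flag suspicious long hex runs in code
# before it is committed. Both names are illustrative.
import os
import re

def get_api_key() -> str:
    key = os.environ.get("CF_API_KEY")
    if not key:
        raise RuntimeError("CF_API_KEY is not set; refusing to start")
    return key

def looks_like_leaked_key(line: str) -> bool:
    # Heuristic only: flags any unbroken run of 32+ hex characters.
    return re.search(r"[0-9a-fA-F]{32}", line) is not None
```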
Categories: Technology

HTTPS or bust: Chrome’s plan to label sites as "Not Secure"

Wed, 14/02/2018 - 20:00

Google just announced that beginning in July 2018, with the release of Chrome 68, web pages loaded without HTTPS will be marked as “not secure”.

More than half of web visitors will soon see this warning when browsing unencrypted HTTP sites, according to data from Cloudflare’s edge that shows 56.62% of desktop requests originate from Chrome. Users presented with this warning will be less likely to interact with these sites or trust their content, so it’s imperative that site operators not yet using HTTPS have a plan to do so by July.

How did we get here (and why)?

To those who have followed the Chrome team’s public statements, this announcement comes as no surprise. Google has been gearing up for this change since 2014, as Chrome boss Parisa Tabriz tweeted and Chris Palmer memorialized in a widely distributed email. While this step is an important and potentially jarring one for users, it’s by no means the last step that Google will take to influence website administrator behavior for the better.

But why are they making this change (now)? Google’s primary motivation for driving HTTPS adoption is simple: a safe browsing experience is good for business. Users that feel safe on the web spend more time viewing and interacting with ads and other services that Google gets paid to deliver. (To be clear: these motivations do not in any way diminish the outstanding work of the Chrome team, whose members are passionate about protecting users for a myriad of non-business reasons. We applaud their efforts in making the web a safer place and are excited to see other browsers follow their lead.)

Google must feel the time is right to make the change thanks to HTTPS page loads continuing to climb steadily and minimal fallout from their previous, incremental steps. Emily Schechter, the Chrome Security Product Manager who announced the change, writes: “we believe https usage will be high enough by july [2018] that this will be OK”. Currently, the ratio of user interaction with secure origins to non-secure sits at 69.7%; five months ago it was just 62.5% and thus it’s easy to imagine Chris Palmer’s suggested threshold of 75% will have been met by July.

Such a change would have been far too disruptive just one year ago, but thanks to the efforts of Google and other participants in the webPKI ecosystem (including Cloudflare), a path has been paved towards 100% adoption. Today, HTTPS is fast, simple to deploy, and cost-effective if not free—and there’s no longer an excuse for not using SSL/TLS. Even static sites need encryption to prevent malicious third-parties from tracking your users or injecting ads into your site.

Important milestones towards HTTPS ubiquity, alongside the percent of page loads using HTTPS1 (shown in parentheses after each entry):

  • 2H 2013 (~25%): Edward Snowden releases thousands of pages of classified documents, confirming that the NSA has been passively collecting plaintext communication. At the time, very few sites used HTTPS by default (even the traffic between Google's data centers was unencrypted), making it far easier for these communications to be monitored.
  • 2014/08/06 (31.7%): Google publishes a blog post disclosing that they're starting to use the availability of a site over HTTPS as a positive ranking signal for SEO purposes.
  • 2014/09/24 (31.8%): Cloudflare announces Universal SSL, which provides free SSL certificates and SSL/TLS termination to the then-two million sites on our network.
  • 2014/12/12 (32.3%): Google's Chris Palmer emails blink-dev with "Proposal: Marking HTTP As Non-Secure". This original proposal has been memorialized here.
  • 2015/02/26 (33.7%): Google's Joel Weinberger emails the blink-dev mailing list with an "Intent to deprecate" for certain features unless used with secure origins (i.e., HTTPS). Initially this list includes: device motion/orientation, EME, fullscreen, geolocation, and getUserMedia.
  • 2015/04/30 (35.4%): Mozilla's Richard Barnes publishes "Deprecating Non-Secure HTTP", announcing Mozilla's intent to eventually "phase out non-secure HTTP" from Firefox.
  • 2015/10/19 (37.9%): ISRG's Josh Aas announces that Let's Encrypt, a new free CA, is now trusted by all major browsers, thanks to a cross-sign from IdenTrust.
  • 2015/12/03 (39.5%): Let's Encrypt officially launches into public beta.
  • 2016/06/14 (45.0%): Apple announces at WWDC16 that, by the end of 2016, the App Store will require that applications be built with App Transport Security (ATS) in order to be accepted. ATS prohibits the use of plaintext HTTP and thus helps drive the adoption of HTTPS.
  • 2016/06/22 (45.1%): Google's Adrienne Porter Felt et al. present "Rethinking Connection Security Indicators" at USENIX's Twelfth Symposium On Usable Privacy and Security. In this paper Adrienne and team "select and propose three indicators", which have already been adopted by Chrome (including the "Not secure" label).
  • 2016/09/08 (44.8%): Google's Emily Schechter publishes "Moving towards a more secure web", in which she writes that "Beginning in January 2017 (Chrome 56), we'll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure." She also reiterates Google's plan to eventually "label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS."
  • 2017/01/20 (50.78%): A post on the Mozilla Security Blog titled "Communicating the Dangers of Non-Secure HTTP" informs users that in upcoming releases, Firefox will show an in-context message when a user clicks into a username or password field on a page that doesn't use HTTPS. Firefox's in-context warnings are even more prominent than those implemented by Chrome.
  • 2017/01/31 (51.9%): As announced in September 2016, Chrome 56 begins marking pages as "Not secure" if they i) contain a password field or ii) a user interacts with a credit card field.
  • 2017/03/30 (55.1%): To assist SaaS providers in driving HTTPS adoption for their customers' custom/vanity domains, Cloudflare announces our SSL for SaaS Provider offering. Historically, it has been difficult and time consuming for SaaS providers to obtain (and renew) SSL certificates on behalf of their end-users, and thus very few offered free SSL for all customers.
  • 2017/04/27 (56.3%): Google's Emily Schechter announces that "Beginning in October 2017, Chrome will show the 'Not secure' warning in two additional situations: when users enter data on an HTTP page, and on all HTTP pages visited in Incognito mode."
  • 2018/01/15 (69.9%): Mozilla's Anne van Kesteren publishes a blog post, "Secure Contexts Everywhere", in which he explains that "effective immediately, all new features that are web-exposed are to be restricted to secure contexts".
  • 2018/02/08 (69.7%): Google's Emily Schechter writes that "Beginning in July 2018 with the release of Chrome 68, Chrome will mark all HTTP sites as 'not secure'".

1 % of pages loaded over HTTPS by Firefox, 14-day moving average. Source: Firefox Telemetry data and Let's Encrypt. Google also publishes figures on Chrome.

What’s coming next? What should I expect after July 2018?

The "lock" in the address bar was always about motivating sites to migrate to HTTPS, but along the way studies showed that positive trust indicators don’t work. Google’s introduction of "Not secure" is one important step towards the ultimate goal of deprecating HTTP, but as mentioned earlier, will not be their last.

We expect Google’s assault on HTTP to continue throughout the year, culminating with an announcement that the lock will be removed entirely (and replaced by a negative indicator shown only when a site does not utilize HTTPS). Below is some additional detail on this expected next step, along with some additional predictions for the webPKI ecosystem.

1. Google will announce the lock icon’s demise in 2018 and remove it in January 2019 with the release of Chrome 72

Chris Palmer’s email to blink-dev in 2014 included this "strawman proposal" for introducing negative indicators and phasing out the marking of secure origins entirely:

Secure > 65%: Non-secure origins marked as Dubious
Secure > 75%: Non-secure origins marked as Non-secure
Secure > 85%: Secure origins unmarked

True to plan, Chrome 68 will go stable right around the time HTTPS page loads reach 75%. (Our initial forecast, based on connections to our edge and telemetry data from Firefox, was that the ratio would be at 74.8% by that date; however, we expect last week’s Chrome announcement to accelerate it past 75% before July 24.)
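As a sanity check on that forecast, here is a simple linear extrapolation from the Firefox telemetry figures quoted earlier (62.5% five months ago, 69.7% now). This is only an illustration, not the forecasting method actually used:

```python
# Linear extrapolation of the HTTPS page-load share using the Firefox
# telemetry figures quoted in this post. A rough sanity check, not the
# method behind the 74.8% forecast above.

now = 69.7              # % of page loads over HTTPS today
five_months_ago = 62.5  # % five months ago

growth_per_month = (now - five_months_ago) / 5   # ~1.44 points/month

target = 75.0
months_to_target = (target - now) / growth_per_month

print(f"~{growth_per_month:.2f} points/month")
print(f"~{months_to_target:.1f} months until {target}%")
```

At the recent pace of roughly 1.4 points per month, the 75% threshold lands a little under four months out, comfortably before July.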

Looking forward, the estimated stable dates for future Chrome releases are as follows:

  • Chrome 69 – September 4, 2018
  • Chrome 70 – October 16, 2018
  • Chrome 71 – December 4, 2018
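Extending the list above by one more release cycle is a quick date calculation (illustrative only, not an official schedule):

```python
from datetime import date, timedelta

# Last known stable date from the list above, plus one more ~7-week cycle.
chrome_71_stable = date(2018, 12, 4)
release_cadence = timedelta(weeks=7)   # Chrome ships stable every ~6-7 weeks

chrome_72_estimate = chrome_71_stable + release_cadence
print(chrome_72_estimate)  # 2019-01-22, i.e., late January 2019
```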

With approximately 6-7 weeks between stable releases, Chrome 72 should arrive sometime in late January 2019. By then, we expect HTTPS page loads to exceed 85%, a ratio high enough that Google can be confident the change won’t be too disruptive. Given the significance of this UI change, we expect they’ll announce it sometime in mid-2018.

2. Firefox will soon announce their own schedule for marking insecure origins

Google is not the only major browser taking steps to drive the web to HTTPS only.

Back in April 2015, the Mozilla team announced their intent to (eventually) deprecate HTTP. Since then, Firefox has adopted similar UI indications to Chrome for pages with passwords, announced that “all new features that are web-exposed are to be restricted to secure contexts”, and merged (default disabled) code to mark sites loaded over HTTP as “not secure”.

Beginning in Firefox 59, this “not secure” labeling can be manually enabled (instructions shown below), but Mozilla has not yet officially set a date for when it will be turned on by default. We expect them to announce a date shortly.

3. Microsoft and Apple will continue to lag Google and Mozilla, but will start to enact similar changes

Historically, Microsoft and Apple have moved slower in adopting new browser security policies due in part to the fact they release (and update) their browsers far less frequently than Google and Mozilla.

However, Apple has in the past shown leadership in driving HTTPS adoption, as can be seen in their WWDC 2016 announcement requiring iOS applications to use ATS and TLS 1.2. Our hope is that Microsoft and Apple follow Google and Mozilla’s lead, as Edge, IE, and Safari collectively represent almost 20% of the desktop requests hitting Cloudflare’s edge.

4. Browsers will start attempting connections over HTTPS before trying HTTP

Analogous to how Apple prioritizes IPv6 over IPv4, major browsers will start to try addresses entered without a scheme over HTTPS before falling back to HTTP. The Google Chrome team has already indicated they plan to do this, and we expect (hope!) they’ll announce a timeline for this change sometime in 2018.

5. More CAs (including nascent ones) will follow Let’s Encrypt’s lead in issuing free certs using the ACME protocol

One of the primary complaints from site operators as they react to Chrome and Firefox’s user-facing changes (with the potential to affect their traffic) is that “SSL certificates are expensive”. Even though Cloudflare began issuing free certificates for our reverse proxy users in late 2014 and Let’s Encrypt followed not too long after, there still aren’t many other easy, free options available.

We expect that additional CAs will begin to embrace the ACME protocol for validation and issuance, helping to harden the protocol and increase its adoption. We further expect that new, free-of-charge CAs will enter the market and at least one will be operated by a large, well-funded incumbent such as Google.

6. The CA/B Forum will vote in 2018 to further reduce certificate lifetimes from 27 months to 18 months or less, encouraging more automation

The CA/Browser Forum is a group of CAs and browsers that collaborate on (among other things) a document known as the "Baseline Requirements" or the "BRs". These BRs dictate the minimum requirements that CAs must adhere to, and a recent change to them goes into effect on March 1, 2018; as of that date, the maximum validity period for a certificate drops from 39 months to ~27 months (825 days).
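As a quick check, the new 825-day cap does work out to roughly 27 months:

```python
# Check that the 825-day cap matches the "~27 months" figure above,
# using the average Gregorian month length (~30.44 days).

days_per_month = 365.2425 / 12
new_cap_days = 825

new_cap_months = new_cap_days / days_per_month
print(f"{new_cap_days} days is about {new_cap_months:.1f} months")
```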

The initial proposal, by Ryan Sleevi of Google, was to reduce the lifetime to 12 months, but this was met with strong opposition from the CAs (outside of Let's Encrypt, which already caps lifetimes at 3 months, and DigiCert, which supported 13 months). A compromise was reached and goes into effect shortly, but we expect this topic to come to a vote again for either 18 or 13 months. CAs will again likely oppose this cap (with a few exceptions for the more automated ones), but browsers and root trust store operators may force the change anyway, as it strengthens user security.

As site operators manually replace their expiring 3 year certificates, our hope and expectation is that they see the benefits and ease of automating the certificate lifecycle, encouraging them to deploy HTTPS more broadly across their organizations.

OK, I understand what’s going to happen, but how can I tell now if my site is going to show a warning in July?

The simplest way to tell if your site will soon show a "Not secure" label is by viewing it in a current version of Chrome or Firefox. If you do not see a lock, your site will soon have a more ominous warning. To get a preview of this warning, you can try browsing your site with a development version of either of these browsers by following the instructions below.

  • Use Chrome 65 or later.
    • The easiest way to do this is to install Chrome Canary, which runs bleeding edge builds. Alternatively, you can install the dev channel alongside stable, but this can be confusing to launch as the applications look identical.
  • Browse to chrome://flags/#enable-mark-http-as.
  • Change the setting from Default to Enabled and click the RELAUNCH NOW button.
  • Browse to a site that does not use HTTPS, such as


Firefox has not announced a date yet when this change will go into effect, but you can preview what it will look like when they do.

  • Use Firefox 59 or later.
  • Enter "about:config" in the address bar.
  • Click "I accept the risk!" to view the advanced config.
  • Search for “security.insecure_connection” and flip all false values to true.
  • Browse to a site that does not use HTTPS, such as

What can I do to avoid this warning?

Quite simply, all you need to do to avoid this warning is protect your site with HTTPS using a valid SSL certificate. Cloudflare makes it incredibly simple to do this.

If you sign up with us and point your nameservers to Cloudflare, we take care of the rest for free: validating your domain with one of our Certificate Authority partners, issuing a certificate that covers the apex of your domain and any subdomains (e.g., and *, deploying that certificate to our 120+ data centers around the world for optimal performance, and renewing the certificate automatically when needed.

If you’re not able to sign up with us directly, for example you’re using a subdomain of a SaaS provider that has not yet deployed HTTPS for all users, you may want to suggest they look at our SSL for SaaS Providers offering.

Lastly, if you want to help others avoid these warnings, we're hiring Software Engineers and Product Managers on the Security Engineering team at Cloudflare. Check out our open positions here and come help us drive HTTPS adoption to 100%!

Categories: Technology

Cloudflare ♥ Open Source: upgrade to Pro Plan on the house

Wed, 14/02/2018 - 19:11

Happy Valentine's Day, Internet!

There’s a special place in our heart for all the open source projects that support the Internet and improve the lives of everyone in the developer community, and today seems like an appropriate time to express the gratitude we have for the non-profit / volunteer-run projects that hold everything together.

Cloudflare uses a lot of open source software and also contributes to open source. Informally, Cloudflare has already been upgrading the plans of certain eligible open source projects that have reached out to us or that we have interfaced with. Here are some of the projects whose landing pages are already protected by Cloudflare.

A subset of open source projects on Cloudflare. See more >>

To really pay the goodwill forward, we want to make this opportunity common knowledge in the developer community. In 2018, we intend to provide free Cloudflare Pro Plan upgrades to eligible open source projects (subject to a case-by-case evaluation) that:

  1. provide engineering tools or resources to the developer community; and
  2. are volunteer-run or working on a non-profit basis.

Are you an open source project using @Cloudflare? We want to give you a free Pro Plan to thank you for your work for the community. Please RT!

— Cloudflare (@Cloudflare) January 30, 2018

Making core contributions to a qualifying open source project? Drop a line to with a link to the project’s landing page, repo, and a description of what engineering tools or resources your project provides to the developer community.

And please RT this opportunity, and share it with all the open source contributors in your life. Thanks!

Categories: Technology

Why I’m helping Cloudflare grow in Asia

Wed, 14/02/2018 - 01:00

I’m excited to announce that I’ve joined Cloudflare as Head of Asia. This is an important time for the company as we continue to grow our presence in the region and build on the successes we’ve already had in our Singapore office. In this new role, I’m eager to grow our brand recognition in Asia and optimize our reach to clients by building up teams and channel partners.

A little about me

I’m a Californian with more than 20 years of experience growing businesses across Asia. I initially came to Asia with the Boston Consulting Group and since then I’ve helped Google and Twitter start and grow their businesses in Singapore and Asia. In many cases throughout my career, I’ve been one of the very first employees (sometimes the first) on the ground in this part of the world. To me, the Asian market presents an often untapped opportunity for companies looking to expand, and it’s a challenge that has appealed to me throughout my career.

This year's Chinese New Year celebration

Why Cloudflare?

I’m driven by opportunities to work with global businesses that drive change and are full of ambitious and passionate people. Cloudflare’s mission is to help build a better Internet and the company is focused on democratizing Internet tools that were once only available to large companies. Making security and speed, which are necessary for any strong business, available to anyone with an Internet property, is truly a noble goal. That’s one of the reasons I’m most excited to work with Cloudflare.

Cloudflare is also serious about culture and diversity, an area that’s very important to me. When I was considering joining Cloudflare, I watched videos from the Internet Summit, an annual event that Cloudflare hosts in its San Francisco office (we will be hosting a London version as well this year). One thing that really stood out for me is that nearly half of the speakers were women and all of the speakers came from different backgrounds. The topics could have been covered by a much more homogeneous group of men, but Cloudflare went the extra mile to make sure more diverse perspectives were represented. I’m extremely passionate about encouraging women to pursue opportunities in business and tech so watching so many women give insightful talks made me realize that this was a company I wanted to work for.

Cloudflare Singapore

Now for a little about our work in the region. Cloudflare’s Singapore office opened more than two years ago and has more than 40 employees. Employees here hail from 16 countries and I’m proud to say that the Singapore office has the highest percentage of women.

Functions in Asia include Solutions Engineering, Site Reliability Engineering, Network Operations, Recruiting, Product Development, Operations, Customer Success, and Technical Support. Our team here has made significant contributions in building Cloudflare’s performance and security products, features, and capabilities.

Celebrating Cloudflare's 7th birthday in Singapore

The Singapore team has also had great success serving Cloudflare’s regional customers. We have enterprise customers across all of Asia and across all verticals.

Much of the success in the Singapore office can be attributed to so much effort from all of our pioneering team, especially our first three employees in Singapore: Jimmy, Frankie, and Mark. I’d also like to call out Colin, head of our Sales team in Asia, James, our Solutions Engineering lead in Asia, and Grace Lin, who founded and led our Singapore office for the past two years, commuting back and forth from San Francisco to manage the office. I thank them for all of their hard work in growing Cloudflare’s presence in Asia and I’m excited to work alongside them in this next stage of growth.

Our opportunities in Singapore and beyond

I’m truly looking forward to helping Cloudflare grow its reach over the next five years.

If you’re interested in exploring careers at Cloudflare, we are hiring globally! Our team in Singapore is looking to expand across the region for roles in Systems Reliability Engineering, Network Engineering, Technical Support Engineering, Solutions Engineering, Customer Success Engineering, Recruiting, Account Executives, Business Development Representatives, Sales Operations, Business Operations, and more. Check out our careers page to learn more!

Categories: Technology

Marhaba Beirut! Cloudflare’s 121st location - مرحبا بيروت! موقع “كلاودفلار” ال ١٢١

Tue, 13/02/2018 - 07:00

Lebanon is a historic country, home to two cities among the oldest in the world. There’s a vast mix of influences from the East and West. It’s also the smallest country in continental Asia.

لبنان بلد تاريخي، موطن مدينتين من بين أقدم المدن في العالم. هناك مزيج كبير من التأثيرات من الشرق والغرب. كما أنه أصغر .بلد في آسيا القارية

Marhaba Beirut! Cloudflare’s 121st location - مرحبا بيروت! موقع “كلاودفلار” ال ١٢١
CC-BY-SA Gregor Rom

Lebanon’s connection to the Internet

Lebanon is a little different to most other countries when it comes to the internet, with all connectivity to the outside world flowing via a single network, Ogero. Traffic to Lebanon was previously served from our existing deployments in Marseille and Paris, due to where Ogero connects to the rest of the internet. By deploying locally in Beirut, round-trip latency is cut by around 50 milliseconds. This might seem like almost nothing, but it adds up when you factor in a DNS lookup and 3-way handshake required to open a TCP connection. Internet penetration in Lebanon according to different sources is around 75%, which is quite high. However, the speed available to end users is low, typically in single digit megabits per second.
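To see why a 50 ms round-trip saving "adds up", consider roughly how many round trips a first page load involves. The counts below are assumptions for illustration (cold DNS cache, full TLS 1.2 handshake); TLS 1.3 or connection reuse would change them:

```python
# Back-of-the-envelope estimate of how much a 50 ms round-trip saving
# shaves off a first page load. Round-trip counts are assumptions
# (cold DNS cache, TLS 1.2 full handshake), not measured values.

rtt_saved_ms = 50

round_trips = {
    "DNS lookup": 1,
    "TCP 3-way handshake": 1,
    "TLS 1.2 handshake": 2,
    "HTTP request/response": 1,
}

total_saved_ms = rtt_saved_ms * sum(round_trips.values())
print(f"~{total_saved_ms} ms saved on a first page load")  # ~250 ms
```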

The Ministry of Telecommunications has an ambitious plan to improve connectivity in Lebanon by 2020; a big part of this involves deploying fiber optic cabling to homes and businesses throughout the country. This will inevitably help boost the level of traffic we see coming from Lebanon today. Comparing Lebanon to Denmark, whose population is only a few thousand lower, we serve 7x more traffic to Denmark than to Lebanon.

اتصال لبنان بالإنترنت لبنان يختلف قليلا عن معظم البلدان الأخرى عندما يتعلق الأمر بالإنترنت، فكل اتصال إلى العالم الخارجي يتدفق عبر شبكة واحدة، أوجيرو. كانت حركة مرور الانترنت إلى لبنان في السابق من عمليات النشر الحالية لدينا في مرسيليا وباريس، ويرجع ذلك إلى حيث تتصل أوجيرو ببقية الإنترنت. من خلال النشر محليا في بيروت، أصبح وقت الإستجابة ذهابا وإيابا أقل من 50 ميلي ثانية واحدة.

قد يبدو هذا لا شيء تقريبا، لكنه يصبح ذو معنى عندما تحسب بحث نظام أسماء النطاقات (DNS) ومصافحة ثلاثية الطرق المطلوبة لفتح بروتوكول التحكم بالإرسال .(TCP) ويبلغ انتشار الإنترنت في لبنان وفقا لمصادر مختلفة حوالي ٧٥%، وهو رقم مرتفع جدا. ومع ذلك، فإن السرعة المتاحة للمستخدمين منخفضة، وعادة تكون رقم مفرد من الميغابتس في الثانية الواحدة.

قامت وزارة الاتصالات بعرض خطة طموحة لتحسين الاتصال المتوفر في لبنان بحلول عام ٢٠٢٠، جزء كبير من هذا ينطوي على نشر كابلات الألياف البصرية للمنازل والشركات في جميع أنحاء البلد. وهذا سيساعد حتما على تعزيز مستوى حركة المرور التي نراها اليوم قادمة من لبنان.

Beirut IX

The Internet exchange in Beirut is no exception. With fibre access not possible in Lebanon, ISPs reach the IX by microwave; to give the best access from all around Beirut, it is situated at the top of a hill. With most Internet exchanges, line of sight isn’t a concern, as fibre is available. Our deployment connected to Beirut IX brings over 7 million websites closer to the connected ISPs, making the Internet faster and safer for users in Lebanon.

(Beirut IX) تبادل الإنترنت في بيروت إن تبادل الإنترنت في بيروت ليس استثناء، إذ لا يوجد صلة للالياف الضوئية في لبنان، فإن مزودي خدمة الإنترنت يصلون إلى نقطة تبادل الإنترنت (IX) عن طريق موجات الميكرويف. لإعطاء أفضل وصول من جميع أنحاء بيروت أنها تقع في أعلى تلة. مع معظم تبادلات الإنترنت، خط الأفق ليس مصدر قلق بما أن صلة الألياف متاحة.

إن نشرنا المتصل بتبادل الإنترنت (IX) في بيروت يجلب أكثر من ٧ ملايين موقع على شبكة الإنترنت، مما يجعل الإنترنت أسرع وأكثر أمانا للمستخدمين في لبنان.

Thank you to Layal Jebran for the translation.

Categories: Technology

Bye Bye Blackbird

Tue, 13/02/2018 - 01:16


As we have talked about repeatedly in this blog, we at Cloudflare are not fans of the behavior of patent trolls. They prey upon innovative companies using overly-broad patents in an attempt to bleed settlements out of their targets. When we were first sued by a patent troll called Blackbird Technologies last spring, we decided that we weren’t going along with their game by agreeing to a modest settlement in lieu of going through the considerable effort and expense of litigation. We decided to fight.

We’re happy to report that earlier today, the United States District Court for the Northern District of California dismissed the case that Blackbird brought against Cloudflare. In a two-page order (copied below) Judge Vince Chhabria noted that “[a]bstract ideas are not patentable” and then held that Blackbird’s attempted assertion of the patent “attempts to monopolize the abstract idea of monitoring a preexisting data stream between a server” and is invalid as a matter of law. That means that Blackbird loses no matter what the facts of the case would have been.

The court’s ruling comes in response to a preliminary motion filed by Cloudflare under Section 101 of the U.S. Patent Act. That section defines what sort of things can be patented. Such motions are generally referred to as “Alice” motions because the U.S. Supreme Court held in a 2014 case (Alice Corp. v. CLS Bank Int’l) that a two-part test could be used to determine patent eligibility based on whether something is more than merely an abstract idea or at least creates an inventive use for an abstract principle. The Alice test helps to determine whether something is patentable subject matter or an unpatentable fundamental concept. Judge Chhabria found that Blackbird’s ‘355 patent was too abstract to be patentable subject matter.

Before the court ever even considered Cloudflare’s actions, it found that the supposed innovation reflected in Blackbird’s patent was too abstract to have been protectable in the first place. This means that the case against Cloudflare could not continue, but further, that the patent is completely invalid and Blackbird cannot use it to sue ANYONE in the future.

All of this only confirms the position we’ve taken from the beginning with regard to the way that Blackbird and other patent trolls operate. Blackbird acquired an absurdly broad patent from an inventor that had apparently never attempted to turn that patent into a business that made products, hired people, or paid taxes. And Blackbird used that patent to harass at least three companies that are in the business of making products and contributing to the economy.

Blackbird still has a right to appeal the court’s order, and we’ll be ready to respond in case they do. We will also report back soon to review our related efforts under Project Jengo.

[The court's two-page order]

Categories: Technology

It’s Hard To Change The Keys To The Internet And It Involves Destroying HSM’s

Tue, 06/02/2018 - 22:33

Photo by Niko Soikkeli / Unsplash

The root of the DNS tree has been using DNSSEC to protect the zone content since 2010. DNSSEC is simply a mechanism to provide cryptographic signatures alongside DNS records that can be validated, i.e. prove the answer is correct and has not been tampered with. To learn more about why DNSSEC is important, you can read our earlier blog post.

Today, the root zone is signed with a 2048-bit RSA “Trust Anchor” key. This key is used to sign further keys and to establish the chain of trust that exists in the public DNS at the moment.

With access to this root Trust Anchor, it would be possible to re-sign the DNS tree and tamper with the content of DNS records on any domain, implementing a man-in-the-middle DNS attack… without causing recursors and resolvers to consider the data invalid.

As explained in this blog, the key is very well protected, with eye scanners and fingerprint readers and fire-breathing dragons patrolling the gate (okay, maybe not dragons). Operationally, though, the root zone uses two different keys: the aforementioned Trust Anchor key (called the Key Signing Key, or KSK for short) and the Zone Signing Key (ZSK).

The ZSK (Zone Signing Key) is used to generate signatures for all of the Resource Records (RRs) in a zone.

You can query for the DNSSEC signature (the RRSIG record) of “” using your friendly dig command.

$ dig +dnssec
;; QUESTION SECTION:
; IN A
;; ANSWER SECTION:
 4 IN A
 4 IN A
 4 IN RRSIG A 13 3 5 20180207170906 20180205150906 35273 4W4mJXJRnd/wHnDyNo5minGvZY6hVNSXITnUI+pO6fzhnkpsEp1ko8K7 1PQ6r0s9SwLgrgfneqXyPs4b5X0YDw==

The two A records shown here can be cryptographically verified using the RRSIG and ZSK in the zone. The ZSK can itself be verified using the KSK, and so on… this continues upwards following the “chain of trust” until the root KSK is found.
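The chain-of-trust walk described above can be sketched as a toy model, with each DS record represented as a SHA-256 digest of the child zone's key. The zone names and key bytes below are hypothetical, and real DNSSEC computes DS digests over the full DNSKEY RDATA plus owner name, so this is only an illustration:

```python
import hashlib

def ds_digest(key: bytes) -> str:
    """Toy stand-in for a DS record: a SHA-256 digest of the child's key."""
    return hashlib.sha256(key).hexdigest()

# Hypothetical per-zone keys (real DNSSEC keys are RSA/ECDSA key pairs).
keys = {".": b"root-ksk", "com": b"com-key", "example.com": b"example-key"}

# Each parent zone publishes a DS record binding it to its child's key.
ds_records = {
    "com": ds_digest(keys["com"]),                  # lives in the root zone
    "example.com": ds_digest(keys["example.com"]),  # lives in the com zone
}

def chain_is_valid(trust_anchor: bytes) -> bool:
    """Walk the chain from the configured trust anchor down to the leaf."""
    if trust_anchor != keys["."]:
        return False  # the root key doesn't match our trust anchor
    return all(ds_records[zone] == ds_digest(keys[zone])
               for zone in ("com", "example.com"))

print(chain_is_valid(b"root-ksk"))   # True
print(chain_is_valid(b"wrong-key"))  # False
```

The point of the sketch: everything hangs off the single trust anchor at the top, which is why replacing it is such a delicate operation.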

The tool can be used to help visualize how this verification works for any domain on the internet; for example, here is the trust chain for “”.

[Trust chain diagram]

To verify the RRSIG on “” we would need to cryptographically verify the signatures in reverse order on the diagram. First “”, then “com”, and finally “.” – the root zone.

If you are able to access the secret key that’s used to sign the root, it’s possible to trick resolvers into verifying a "forged" answer.

While DNSSEC signing has been deployed on the root zone for over seven years, there is one operation that has never been attempted: rolling the Key Signing Key. This means generating a new key, updating every part of the DNS infrastructure on the internet that needs it, and retiring the old key completely.

The ZSK (Zone Signing Key) has been rolled religiously every quarter since 2010; rolling the Key Signing Key, however, is a much scarier operation. If it goes wrong, it could leave the root zone's signatures invalid, meaning a large part of the internet would not trust any of its content, effectively knocking DNS offline for validating resolvers. After DNSSEC was designed, a mechanism for rolling out a new Key Signing Key was devised in RFC 5011; this operation is commonly known as the 5011 roll-over.

What is a KEY rollover?

All cryptographic keys have a life cycle that can be represented by states:

  • Generated == the key is created, but only the “owner” knows its properties.
  • Published == the key has been made public, either as the public key itself or a hash of it.
  • Active == the key is in use.
  • Retired == the key has been withdrawn from service but is still published.
  • Revoked == the key has been marked as not to be trusted ever again.
  • Removed == the key has been taken out of publication.

Different keys move through the states in different ways depending on their usage. Some keys are never revoked; simply removing them is sufficient (the root ZSKs, for example, are never revoked). When rolled, the root KSK will pass through all states.

Why is the Root KSK different?

For most keys used in DNS, trust is derived from the relationship between a parent zone and a child zone. The parent publishes a special record, the DS (Delegation Signer), that contains a cryptographically strong binding (a hash) to the actual key. The child has a DNSKEY RRset at the top of its zone, with at least one key that matches one of the DS records in the parent. To complete the chain of trust, the DNSKEY RRset MUST be signed by that key.

The root zone has no parent, thus trust cannot be derived in the same way. Instead, validating resolvers must be configured with the root Trust Anchor. This anchor must be refreshed during a key rollover or the validating resolver will not trust anything it sees in the root zone after the old KSK (from 2010) is retired from service. The Trust Anchors can be updated in a number of ways, such as a manual update, a software update, or an in-band update. The preferred update mechanism is the previously mentioned in-band update mechanism RFC5011-roll.

The process outlined in RFC 5011 relies on two factors: first, that the new key is published in the DNSKEY RRset, which is signed by the old KSK; and second, that it is kept there for at least a hold-down period of 30 days. Validating resolvers that follow the procedure will check frequently to see if there is a new KSK in the DNSKEY set. The new key can be trusted because it has been signed with a key that is already in service. When there is a new key, it is placed in the PendingAddition state. If at any point one of the keys in PendingAddition is removed from the DNSKEY set, the resolver will forget about it. This means that if the key were to appear again, it would start a new 30-day hold-down period.

After the key has been in PendingAddition for 30 consecutive days, it is accepted into the Active state and will be trusted to sign the DNSKEY set for the root. From this point onwards, the new key can be used to sign the Zone Signing Key, and in turn the root zone content itself.
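The hold-down logic above can be sketched as a tiny state machine. This is an illustration only - the data structures and function are invented, and real resolvers track considerably more state (revocation, per-anchor timers, and so on).

```javascript
// Illustrative sketch of the RFC 5011 hold-down logic described above.
const HOLD_DOWN_DAYS = 30;

function updateTrustAnchors(anchors, observedKeyTags, today) {
  // A new key, seen in a DNSKEY RRset signed by a trusted KSK,
  // enters PendingAddition and starts the hold-down timer.
  for (const tag of observedKeyTags) {
    if (!anchors.has(tag)) {
      anchors.set(tag, { state: 'PendingAddition', since: today });
    }
  }
  for (const [tag, info] of anchors) {
    if (info.state !== 'PendingAddition') continue;
    if (!observedKeyTags.includes(tag)) {
      // Key vanished during hold-down: forget it entirely, so a
      // reappearance starts a fresh 30-day wait.
      anchors.delete(tag);
    } else if (today - info.since >= HOLD_DOWN_DAYS) {
      info.state = 'Active'; // now trusted to sign the root DNSKEY set
    }
  }
  return anchors;
}
```

Calling this once per polling interval reproduces the behaviour described above: a key that disappears and reappears starts a fresh 30-day hold-down.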

Why are we rolling the root key trust anchor?

There are two main reasons:

  1. The community wants to be sure that the RFC 5011 mechanism works in practice. Knowing this makes future rollovers possible, and less risky. Regular rollovers are something to be done as a matter of good key hygiene, like changing your password regularly.
  2. It enables thinking about switching to a different algorithm. RSA with a large key size is a strong algorithm, but using it makes DNS packets larger. Other algorithms, such as the elliptic curve ones Cloudflare uses, have smaller keys but more safety per bit. Switching to a new algorithm would require a new key.

Some people advocated rolling the key and changing the algorithm at the same time but that was deemed too risky. The right time to start talking about that is after the current roll concludes successfully.

What has happened so far?

ICANN started the rollover process last year. The new key has been created and replicated to all the HSMs (Hardware Security Modules) in the two facilities that ICANN operates. From now on we will use the terms KSK2010 (the old key) and KSK2017 (the new key).

Before starting the roll-over process, testing of RFC5011 implementations took place and most implementations reported success.

It’s Hard To Change The Keys To The Internet And It Involves Destroying HSM’s

The new key was published in DNS on July 11th 2017, so the DNSKEY set now contains two KSKs. At that point the new key, KSK2017, entered the Published state. It was scheduled to become Active on October 11th 2017. Any validating resolver that had been operating for at least 30 days during the July 11-October 11 window should have placed the new Trust Anchor in the "Active" state before October 11th. But sometimes things do not go according to plan.

One of the things put in place before the rollover was a way for resolvers to signal to authoritative servers which trust anchors they trust, RFC 8145. RFC 8145 was only published in April 2017, so during the KSK2017 key publication phase only the latest version of Bind-9 supported it by default.

The mechanism works by resolvers periodically sending a query to the root nodes, with a query name formatted like "_ta-4a5c" or "_ta-4a5c-4f66". The name contains hex-encoded versions of the Trust Anchor identifiers, 19036 and 20326 respectively. This at least allows root operators to estimate the percentage of resolvers that have implemented RFC 8145 and are aware of each Trust Anchor.
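The query-name encoding is easy to reproduce: each key tag is rendered as four lowercase hex digits (19036 is 0x4a5c, 20326 is 0x4f66) and joined with dashes. A small sketch - the function name is ours, not from the RFC:

```javascript
// Build an RFC 8145-style trust anchor signaling query name
// from a list of DNSSEC key tags.
function trustAnchorQueryName(keyTags) {
  return '_ta-' + keyTags
    .map(tag => tag.toString(16).padStart(4, '0'))
    .join('-');
}
```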

On September 29th, based on evidence from the resolvers that sent in reports, ICANN postponed the roll.
It was concerning that the latest and greatest version of Bind-9 failed to pick up the new Trust Anchor in 4% of cases; this was explained in more detail in a DNS-OARC presentation. But this still leaves us with the question: why?

It is also important to note that other implementations of RFC 8145 did not enable it by default, thus most of the reports came from Bind-9.

Rolling the KSK at this point would have resulted in the remaining resolvers not trusting the content of the root zone, ultimately breaking all DNS resolution through them.

Operational reality vs the protocol design

At Cloudflare we operate validating resolvers in all of our >120 data centers, and we monitored the adoption of trust anchors on a weekly basis, expecting everything to work correctly. After 6 weeks we noticed that things were not going right: some of the resolvers had picked up the new trust anchor, but others had not accepted it even though more than enough time had passed.

First let’s look at the assumptions that RFC5011 makes.

  • The resolver is a long running process that understands time and can keep state
  • The resolver has access to persistent writeable storage that will work across reboots.

In the protocol community we had worried a lot about the first one; for the second one we had identified two failure cases: a machine configured from an old read-only medium, and a new machine taking over. Both were considered rare enough that operators would know to deal with those exceptions.

Turns out the second assumption in RFC5011 had more failure modes than the community expected.

Bind-9, for example, originally had a hardcoded list of "trusted-keys". When RFC 5011 support was later added, the configuration option "managed-keys" was introduced. It looks like some installations, while religiously updating the software, never changed from the fixed configuration to the RFC 5011 managed one. In this case the only recovery is to change the configuration. In some cases the operator selected this operating mode assuming he/she would distribute a new configuration file during a rollover, but that person may have left or forgotten.

Software that uses managed-keys operation (Bind-9, Unbound, Knot-resolver) uses a file to maintain state between restarts. But it is possible that the file is read-only, in which case managed-keys behaves just like trusted-keys. Why anyone would run a configuration like that is a good question. The interesting observation is that unless the implementation complains loudly about the read-only state, the operator is not likely to notice. The only recovery option here is to change the configuration so the trust anchor file can be written.

Software upgrades are another possible reason for not picking up the new trust anchor, but only if the file containing the Trust Anchor state is overwritten or lost. This can happen if the resolver machine has a disk replacement/reformat etc., but in this case the net effect is only to slow down the acceptance of the new trust anchor. This failure is visible as KSK2017 spending more than 30 days in the "PendingAddition" state, but only if someone is looking.

Modern operating practices use "containers" that are spun up and down; in those cases there is no "persistent" storage. To avoid validation errors, the software installed must either know about the new key or perform a key discovery upon startup, as the unbound-anchor program does for Unbound.

There are probably a few other ways operations may cause the errors seen via the Trust Anchor signaling.

Back to what happened at Cloudflare. In our case the issue was a combination of upgrade and container issues: we were upgrading software on all our nodes, and our resolver processes were being allocated to different computers. Our fix was to quickly upgrade to a software version that knew about the new trust anchor, so future restarts/migrations would not cause loss of trust.

What is next for the KSK rollover

ICANN has just asked for comments on restarting the rollover process and performing the roll on October 11th 2018.

What can you do to prepare for the key rollover?
If you operate a validating resolver, make sure you have the latest version of your vendor's software, audit the configuration files and file permissions, and check that your software supports both KSK2010 (key tag 19036) and KSK2017 (key tag 20326).

If you are a concerned end user, right now there is nothing you can do, but the IETF is considering a proposal to allow remote trust anchor checking via queries. Hopefully this will be standardized soon and DNS resolver software vendors will add support, but until then there is no testing you can do.

If you speak a language other than English and worry that your local operators should know about these DNSSEC key rollover failure modes, feel free to republish this blog, or parts of it, in your language.

HSM destruction at the next KSK ceremony Feb 7th 2018

Every quarter there is a new KSK signing ceremony where signatures for 3 months of use of the KSK are generated. February 6th 2018 is the next one, and it will sign a DNSKEY set containing both KSKs but signed only by KSK2010. You can see the script for the ceremony here, and you can even watch it online. But the fun part of this particular ceremony is the destruction of the old HSMs (Hardware Security Modules) via some fancy contraption.

An HSM is a special kind of equipment that can store private keys without ever leaking them, and it protects its secrets by erasing them when someone tries to access or tamper with the equipment. The secrets remain in the HSM only as long as a non-replaceable battery lasts. The old KSK HSMs have a lifetime of 10 years and were made in late 2009 or early 2010, so the batteries are not designed to last much longer. Last year the private keys were safely and securely moved to newer models, and the new machines have been in use for about a year. The final step of retiring the old machines is to destroy them during the ceremony - tune in to see how that is done.

Excited by working on cutting edge stuff? Or building systems at a scale where once-in-a-decade problems can get triggered every day? Then join our team.

Categories: Technology

How we made our page-load optimisations even faster

Fri, 02/02/2018 - 16:41
How we made our page-load optimisations even faster

In 2017 we made two of our web optimisation products - Mirage and Rocket Loader - even faster! Combined, these products speed up around 1.2 billion web-pages a week. The products are both around 5 years old, so there was a big opportunity to update them for the brave new world of highly-tuned browsers, HTTP2 and modern Javascript tooling. We measured a performance boost that, very roughly, will save visitors to sites on our network between 50-700ms. Visitors that see content faster have much higher engagement and lower bounce rates, as shown by studies like Google’s. This really adds up, representing a further saving of 380 years of loading time each year and a staggering 1.03 petabytes of data transfer!

How we made our page-load optimisations even faster
Cycling image Photo by Dimon Blr on Unsplash.

What Mirage and Rocket Loader do

Mirage and Rocket Loader both optimise the loading of a web page by reducing and deferring the number of assets the browser needs to request for it to complete HTML parsing and rendering on screen.

Mirage

With Mirage, users on slow mobile connections will quickly be shown a full page of content, using low file-size placeholder images that load much faster. Without Mirage, visitors on a slow mobile connection must wait a long time for high-quality images to download, and will perceive your website as slow:

How we made our page-load optimisations even faster

With Mirage visitors will see content much faster, will thus perceive that the content is loading quickly, and will be less likely to give up:

How we made our page-load optimisations even faster Rocket Loader

Browsers will not show content until all the Javascript that might affect it has been loaded and run. This can mean users wait a significant time before seeing any content at all, even if that content is the only reason they're visiting the page!

How we made our page-load optimisations even faster

Rocket Loader transparently defers all Javascript execution until the rest of the page has loaded. This allows the browser to display the content the visitors are interested in as soon as possible.

How we made our page-load optimisations even faster How they work

Both of these products involve a two-step process: first our optimizing proxy-server rewrites customers' HTML as it's delivered, and then our on-page Javascript attempts to optimise aspects of the page load. For instance, Mirage's server-side component rewrites image tags as follows:

<!-- before --> <img src="/some-image.png"> <!-- after --> <img data-cfsrc="/some-image.png" style="display:none;visibility:hidden;">

Since browsers don't recognise data-cfsrc, the Mirage Javascript can control the whole process of loading these images. It uses this opportunity to intelligently load placeholder images on slow connections.
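The server-side transformation above can be mimicked in a few lines. This is a toy illustration only - the real rewriter operates on the HTML stream as it is proxied, not on a regex over a complete document, and the function name is invented:

```javascript
// Toy version of Mirage's server-side rewrite: swap src for data-cfsrc
// and hide the image so the on-page Javascript controls its loading.
function rewriteImgTags(html) {
  return html.replace(
    /<img\s+src=("[^"]*")\s*>/g,
    '<img data-cfsrc=$1 style="display:none;visibility:hidden;">'
  );
}
```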

Rocket Loader uses a similar approach to de-prioritise Javascript during page load, allowing the browser to show visitors the content of the page sooner.

The problems

The Javascript for both products was written years ago, when ‘rollup’ brought to mind a poor lifestyle choice rather than an excellent build-tool. With the big changes we’ve seen in browsers, protocols, and JS, there were many opportunities to optimise.

Dynamically... slowing things down

Designed for the ecosystem of the time, both products were loaded by Cloudflare’s asynchronous-module-definition (AMD) loader, called CloudflareJS, which also bundled some shared libraries.

This meant the process of loading Mirage or Rocket Loader looked like:

  1. CFJS inserted in blocking script tag by server-side rewriter
  2. CFJS runs, and looks at some on-page config to decide at runtime whether to load Rocket/Mirage via AMD, inserting new script tags
  3. Rocket/Mirage are loaded and run
Fighting browsers

Dynamic loading meant the products could not benefit from optimisations present in modern browsers. Browsers now scan HTML as they receive it instead of waiting for it all to arrive, identifying and loading external resources like script tags as quickly as possible. This process is called preload scanning, and is one of the most important optimisations performed by the browser. Since we used dynamic code inside CFJS to load Mirage and Rocket Loader, we were preventing them from benefitting from the preload scanner.

To make matters worse, Rocket Loader was being dynamically inserted using that villain of the DOM API, document.write - a technique that creates huge performance problems. Understanding exactly why is involved, so I’ve created a diagram. Skim it, and refer back to it as you read the next paragraph:

How we made our page-load optimisations even faster

As said, using document.write to insert scripts is particularly damaging to page load performance. Since the document.write that inserts the script is invisible to the preload scanner (even if the script is inline, which ours isn't, preload scanning doesn't even attempt to scan JS), at the instant it is inserted the browser will already be busy requesting resources the scanner found elsewhere in the page (other script tags, images etc). This matters because a browser encountering a script that is neither deferred nor asynchronous, like Rocket Loader, must block all further building of the DOM tree until that script is loaded and executed, to give the script a chance to modify the DOM. So Rocket Loader was being inserted at an instant at which it was going to be very slow to load, due to the backlog of requests from the preload scan, and therefore caused a very long delay until the DOM parser could resume!

Aside from this grave performance issue, it became more urgent to remove document.write when Chrome began to intervene against it in version 55 triggering a very interesting discussion. This intervention would sometimes prevent Rocket Loader from being inserted on slow 2G connections, stopping any other Javascript from loading at all!

Clearly, document.write needed to be extirpated!

Unused and over-general code

CFJS was authored as a shared library for Cloudflare client-side code, including the original Cloudflare app store. This meant it had quite a large set of APIs. Although both Mirage and Rocket Loader depended on some of them, the overlap was actually small. And since we've launched the new, shiny Cloudflare Apps, CFJS has no other important products dependent upon it.

A plan of action

Before joining Cloudflare in July this year, I had been working in TypeScript, a language with all the lovely new syntax of modern Javascript. Taking over multiple AMD, ES5-based projects using Gulp and Grunt was a bit of a shock. I really thought I'd written my last define(['writing', 'very-bug'], function(twice, prone) {}), but here I was in 2017 seeing it again!

So it was very tempting to do a big-bang rewrite and get back to playing with the new ECMAScript 2018 toys. However, I’ve been involved in enough rewrites to know they’re very rarely justified, and instead identified the highest priority changes we’d need to improve performance (though I admit I wrote a few git checkout -b typescript-version branches to vent).

So, the plan was:

  1. identify and inline the parts of CFJS used by Mirage and Rocket Loader
  2. produce a new version of the other dependencies of CFJS (our logo badge widget is actually hardcoded to point at CloudflareJS)
  3. switch from AMD to Rollup (and thus ECMAScript import syntax)

The decision to avoid making a new shared library may be surprising, especially as tree-shaking avoids some of the code-size overhead from unused parts of our dependencies. However, a little duplication seemed the lesser evil compared to cross-project dependencies given that:

  • the overlap in code used was small
  • over-general, library-style functions were part of why CFJS became too big in the first place
  • Rocket Loader has some exciting things in its future...

Sweating kilobytes out of minified + Gzipped Javascript files would be a waste of time for most applications. However, in the context of code that'll be run literally millions of times in the time it takes you to read this article, it really pays off. This is a process we'll be continuing in 2018.

Switching out AMD

Switching out Gulp, Grunt and AMD was a fairly mechanical process of replacing syntax like this:

define(['cloudflare/iterator', 'cloudflare/dom'], function(iterator, dom) { // ... return { Mirage: Mirage, }; })

with ECMAScript modules, ready for Rollup, like:

import * as iterator from './iterator'; import { isHighLatency } from './connection'; // ... export { Mirage } Post refactor weigh-in

Once the parts of CFJS used by the projects were inlined into the projects, we ended up with both Rocket and Mirage being slightly larger (all numbers minified + GZipped):

[Chart: minified + Gzipped sizes in bytes. Mirage: old 8,019 main file + 21,572 shared dependencies; new 13,029 total. Rocket Loader: old 25,151 main file + 21,572 shared dependencies; new 31,621 total.]

So we made a significant file-size saving (about half a jQuery’s worth) vs the original file-size required to completely load either product.

New insertion flow

Before, our original insertion flow looked something like this:

// on page embed, injected into customers' pages <script> var cloudflare = { rocket: true, mirage: true }; </script> <script src="/cloudflare.min.js"></script>

Inside cloudflare.min.js we found the dynamic code that, once run, would kick off the requests for Mirage and Rocket Loader:

// cloudflare.min.js if(cloudflare.rocket) { require(“cloudflare/rocket”); }

Our approach is now far more browser friendly, roughly:

// on page embed <script> var cloudflare = { /* some config */ } </script> <script src="/mirage.min.js"></script> <script src="/rocket.min.js"></script>

If you compare the new insertion sequence diagram, you can see why this is so much better:

How we made our page-load optimisations even faster


Theory implied our smaller, browser-friendly strategy should be faster, but only by doing some good old empirical research would we know for sure.

To measure the results, I set up a representative test page (including Bootstrap, custom fonts, some images, text) and calculated the change in the average Lighthouse performance scores out of 100 over a number of runs. The metrics I focussed on were:

  1. Time till first meaningful paint (TTFMP) - FMP is when we first see some useful content, e.g. images and text
  2. Overall - this is Lighthouse's aggregate score for a page - the closer to 100, the better
[Chart: observed average Lighthouse scores (max 100). Overall: Mirage 93.4 old vs 93.4 new; Rocket Loader 86.6 old vs 92.9 new. FMP score: Mirage 88 old vs 89 new; Rocket Loader 72.2 old vs 86.8 new. Average TTFMP: Mirage 2195.3ms old vs 2146.3ms new; Rocket Loader 2959.5ms old vs 2265.5ms new.]

Assessment

So, improved metrics across the board! We can see the changes have resulted in solid improvements, e.g. a reduction in our average time till first meaningful paint of 694ms for Rocket Loader, and 49ms for Mirage.


The optimisations to Mirage and Rocket Loader have resulted in less bandwidth use, and measurably better performance for visitors to Cloudflare optimised sites.

Footnotes
  1. The following are back-of-the-envelope calculations. Mirage gets 980 million requests a week, TTFMP reduction of 50ms. There are 1000 ms in a second * 60 seconds * 60 minutes * 24 hours * 365 days = 31.5 billion milliseconds in a year. So (980e6 * 50 * 52) / 31.5e9 = in aggregate, 81 years less waiting for first-paint. Rocket gets 270 million requests a week, average TTFMP reduction of 694ms, (270e6 * 694 * 52) / 31.5e9 = in aggregate, 301 years less waiting for first-meaningful-paint. Similarly 980 million savings of 16kb per week for Mirage = 817.60 terabytes per year and 270 million savings of 15.2kb per week for Rocket Loader = 213.79 terabytes per year for a combined total of 1031 terabytes or 1.031 petabytes.
  2. and a tiny 1.5KB file for our web badge - written in TypeScript - which previously was loaded on top of the 21.6KB CFJS
  3. shut it Hume
  4. Thanks to Peter Belesis for doing the initial work of identifying which products depended upon CloudflareJS, and Peter, Matthew Cottingham, Andrew Galloni, Henry Heinemann, Simon Moore and Ivan Nikulin for their wise counsel on this blog post.
Categories: Technology

Coming soon to a university near you

Thu, 01/02/2018 - 22:04
Coming soon to a university near you

Attention software engineering students: Cloudflare is coming to the University of Illinois at Urbana-Champaign and the University of Wisconsin–Madison, and we want to meet you! We will be attending UW–Madison’s Career Connection on Wednesday, February 7 and UIUC’s Startup Career Fair on Thursday, February 8. We’ll also be hosting tech talks at UIUC on Friday, February 2 at 6:00pm in 2405 Siebel Center and at UW–Madison on Tuesday, February 6 (time and location coming soon).

Coming soon to a university near you
Cloudflare staff at YHack 2017. Photo courtesy Andrew Fitch.

Built in Champaign

In early 2016, Cloudflare opened an engineering office in Champaign, IL to build Argo Smart Routing. Champaign's proximity to the University of Illinois, one of the nation's top engineering schools, makes it an attractive place for high-tech companies to set up shop and for talented engineers to call home. Since graduating from UIUC in 2008, I've had opportunities to work on amazing software projects, growing technically and as a leader, all while enjoying the lifestyle benefits of Champaign (15 minute commute, anyone?).

Cloudflare has attended annual recruiting events at UIUC since the Champaign office was opened. This year, we've started to expand our search to other top engineering schools in the midwest. In the fall semester we attended a career fair at UW-Madison. We were impressed with the caliber of talent we saw, which made it an easy decision to return. Our hope is to show students studying at universities in the midwest the opportunity to build a career right here, working on compelling projects like Argo.

Beyond the Great Plains

While we hope that many students will consider helping us build Argo in Champaign, Cloudflare has many open positions in all of our office locations, including San Francisco, London, and Austin, TX. If you're interested in a particular role or location, come talk to us at the career fairs and we'll help get you connected!

Not a student, but interested in working on Argo in Champaign? Apply here!

Categories: Technology

Cloudflare Workers is now on Open Beta

Thu, 01/02/2018 - 17:00
Cloudflare Workers is now on Open Beta

Cloudflare Workers Beta is now open!

Cloudflare Workers lets you run JavaScript on Cloudflare’s edge, deploying globally to 120+ data centers around the world in less than 30 seconds. Your code can intercept and modify any request made to your website, make outbound requests to any URL on the Internet, and replace much of what you might need to configure your CDN to do today. Even better, it will do this from all our edge locations around the world, closer to many of your users than your origin servers can ever be. You will have a fully functional Turing-complete language at your fingertips, allowing you to build powerful applications on the edge. The only limit is your imagination.

Cloudflare Workers is now on Open Beta

To get started:

  • Sign in to your account on
  • Visit the Workers tab.
  • Launch the editor.
  • Write some code and save it.
  • Go to the routes tab and specify on which requests you want Workers to run

That’s it!

You can start by writing a simple ‘hello world’ script, but chances are that you are going to write Workers that are more complicated. You can check out our docs page with recipes.

We will keep adding new recipes to our docs. All the recipes are in a Github repository; if you'd like to add your own, send us a pull request.

Check out the Workers Community to see what other people are building. Please share your feedback and questions!

Cloudflare Workers is completely free during the open beta. We do intend to charge for Workers, but we will notify you of our plans at least thirty days before any changes are made.

Categories: Technology

Writing complex macros in Rust: Reverse Polish Notation

Wed, 31/01/2018 - 12:13
Writing complex macros in Rust: Reverse Polish Notation

This is a Korean translation of a prior post.

(This is a repost of a tutorial originally published on my personal blog)

Rust has lots of interesting features, and among them is a powerful macro system. Unfortunately, even after reading The Book[1] and various tutorials, when it came to implementing a macro that needed to handle a complex list of different elements, I still struggled to understand how it should be done, and it took a while before the light went on in my head and I started throwing macros at everything :) (okay, not to the point of I-use-macros-because-I-don't-want-to-write-functions-or-types-or-lifetimes, as I've seen some people do, but only where they're actually useful)

Writing complex macros in Rust: Reverse Polish Notation
CC BY 2.0 image by Conor Lawless

So, in this post I want to explain the principles I believe you need for writing such macros. I'll assume you've read the macros section of The Book and are familiar with basic macro definitions and token types.

This tutorial uses Reverse Polish Notation (RPN) as an example. It's interesting because it's simple enough, and you may already be familiar with it from school, but statically implementing it at compile time will require a recursive macro.

Reverse Polish Notation (also called postfix notation) uses a stack for all its operations: operands are pushed onto the stack, and each [binary] operator takes two operands off the stack, evaluates the result and pushes it back. So an expression like:

2 3 + 4 *

is evaluated as follows:

  1. push 2 onto the stack
  2. push 3 onto the stack
  3. take two values off the stack (3 and 2), apply the operator + and push the result (5) back onto the stack
  4. push 4 onto the stack
  5. take the last two values off the stack (4 and 5), apply the operator * (4 * 5) and push the result (20) back onto the stack
  6. end of expression; the single value left on the stack, 20, is the result

In the infix notation commonly used in mathematics and most modern programming languages, the expression would be written as (2 + 3) * 4.

So let's write a macro that evaluates Reverse Polish Notation at compile time by converting it into the infix notation Rust understands.

macro_rules! rpn { // TODO } println!("{}", rpn!(2 3 + 4 *)); // 20

Let's start with pushing numbers onto the stack.

Macros don't currently allow matching on literals, and we can't use expr because, rather than consuming a single number, it could match a sequence like 2 + 3 ... . Instead we'll use tt, a generic token matcher that consumes exactly one token tree (either a primitive token such as a literal/identifier/lifetime, or a ()/[]/{}-parenthesised group containing any number of further tokens).

macro_rules! rpn { ($num:tt) => { // TODO }; }

Now, we need a variable for the stack.

We want this stack to exist only at compile time, so a macro can't use a real variable. Instead, the trick is to keep a separate token sequence that can be passed around and used as a kind of accumulator.

In this case, let's represent the stack as a comma-separated sequence of expr (since we'll use it not only for plain numbers but also for intermediate infix expressions), and wrap it in square brackets to separate it from the rest of the input:

macro_rules! rpn { ([ $($stack:expr),* ] $num:tt) => { // TODO }; }

Note that this token sequence is not a real variable - you can't modify its contents or read it back later. Instead, you can create a new copy of the token sequence with the required changes applied, and recursively invoke the same macro with it.

If you have a background in functional languages, or have used a library that provides immutable data, this approach - mutating data by creating a changed copy, and processing lists via recursion - will already be familiar to you:

macro_rules! rpn { ([ $($stack:expr),* ] $num:tt) => { rpn!([ $num $(, $stack)* ]) }; }

Now, the case of a single lone number is obviously rather unlikely and not very interesting, so we'll want to match zero or more tt tokens following that number, which can be passed on to the next invocation of the macro for further matching and processing:

macro_rules! rpn { ([ $($stack:expr),* ] $num:tt $($rest:tt)*) => { rpn!([ $num $(, $stack)* ] $($rest)*) }; }

At this point we don't support operators yet. How should we match them?

If our RPN were a sequence of tokens we wanted to process in exactly the same way, we could simply use a list like $($token:tt)*. Unfortunately, that wouldn't give us a way to walk through the list and either push an operand or apply an operator depending on each token.

The Book says that "the parsing of the macro system must be unambiguous", and this is true for a single macro branch - we can't match a sequence of numbers followed by an operator like $($num:tt)* +, because + is itself a valid token that could be matched by the tt group. But this is where recursive macros can help.

If your macro definition has multiple branches, Rust will try them one by one, so we can avoid the conflict by putting the operator branches before the number handling:

macro_rules! rpn { ([ $($stack:expr),* ] + $($rest:tt)*) => { // TODO }; ([ $($stack:expr),* ] - $($rest:tt)*) => { // TODO }; ([ $($stack:expr),* ] * $($rest:tt)*) => { // TODO }; ([ $($stack:expr),* ] / $($rest:tt)*) => { // TODO }; ([ $($stack:expr),* ] $num:tt $($rest:tt)*) => { rpn!([ $num $(, $stack)* ] $($rest)*) }; }

As said earlier, an operator applies to the last two numbers on the stack, so we'll match them separately, "evaluate" the result (constructing an ordinary infix expression) and push it back:

macro_rules! rpn { ([ $b:expr, $a:expr $(, $stack:expr)* ] + $($rest:tt)*) => { rpn!([ $a + $b $(, $stack)* ] $($rest)*) }; ([ $b:expr, $a:expr $(, $stack:expr)* ] - $($rest:tt)*) => { rpn!([ $a - $b $(, $stack)* ] $($rest)*) }; ([ $b:expr, $a:expr $(, $stack:expr)* ] * $($rest:tt)*) => { rpn!([ $a * $b $(,$stack)* ] $($rest)*) }; ([ $b:expr, $a:expr $(, $stack:expr)* ] / $($rest:tt)*) => { rpn!([ $a / $b $(,$stack)* ] $($rest)*) }; ([ $($stack:expr),* ] $num:tt $($rest:tt)*) => { rpn!([ $num $(, $stack)* ] $($rest)*) }; }

I'm not a big fan of this kind of blatant repetition, but, as with literals, there is no dedicated token type for matching operators.

What we can do, though, is add a helper responsible for evaluation, and delegate the explicit operator branches to it.

Macros can't use external helpers, but the one thing you can be sure of is that your macro already exists, so the trick is to add a branch of the same macro "marked" with a unique token sequence, and call it recursively just like we did in the normal branches.

Let's use @op as such a marker, and accept any operator via tt inside it (in this context tt is unambiguous, since we only ever pass operators to this helper).

Also, the stack no longer needs to be expanded in each individual branch - since we wrapped it in [] earlier, it can be matched as just another token tree (tt) and passed on to the helper:

macro_rules! rpn { (@op [ $b:expr, $a:expr $(, $stack:expr)* ] $op:tt $($rest:tt)*) => { rpn!([ $a $op $b $(, $stack)* ] $($rest)*) }; ($stack:tt + $($rest:tt)*) => { rpn!(@op $stack + $($rest)*) }; ($stack:tt - $($rest:tt)*) => { rpn!(@op $stack - $($rest)*) }; ($stack:tt * $($rest:tt)*) => { rpn!(@op $stack * $($rest)*) }; ($stack:tt / $($rest:tt)*) => { rpn!(@op $stack / $($rest)*) }; ([ $($stack:expr),* ] $num:tt $($rest:tt)*) => { rpn!([ $num $(, $stack)* ] $($rest)*) }; }

Now all other tokens are handled by their corresponding branches, so we just need to handle the final case, where the stack contains a single item and no more tokens are left:

macro_rules! rpn {
    // ...
    ([ $result:expr ]) => {
        $result
    };
}

At this point, if you invoke this macro with an empty stack and an RPN expression, it will already produce a correct result:


println!("{}", rpn!([] 2 3 + 4 *)); // 20

However, our stack is an implementation detail, and we wouldn't want every consumer to have to pass in an empty stack, so let's add another catch-all branch at the end that serves as an entry point and adds [] automatically:


macro_rules! rpn {
    // ...
    ($($tokens:tt)*) => {
        rpn!([] $($tokens)*)
    };
}

println!("{}", rpn!(2 3 + 4 *)); // 20

Our macro now even handles the more complex example expression from the Wikipedia page on RPN!

println!("{}", rpn!(15 7 1 1 + - / 3 * 2 1 1 + + -)); // 5

Error handling

Now everything seems to work smoothly for correct RPN expressions, but for a macro to be production-ready, it also needs to handle invalid input well, with reasonable error messages.

First, let's insert an extra number in the middle and see what happens:

println!("{}", rpn!(2 3 7 + 4 *));


error[E0277]: the trait bound `[{integer}; 2]: std::fmt::Display` is not satisfied
 --> src/
   |
36 | println!("{}", rpn!(2 3 7 + 4 *));
   |                ^^^^^^^^^^^^^^^^^ `[{integer}; 2]` cannot be formatted with the default formatter; try using `:?` instead if you are using a format string
   |
   = help: the trait `std::fmt::Display` is not implemented for `[{integer}; 2]`
   = note: required by `std::fmt::Display::fmt`

Okay, that doesn't look helpful, as it provides no information relevant to the actual mistake in the expression.

In order to figure out what happened, we need to debug our macro. For that, we'll use the trace_macros feature (as with any other optional compiler feature, you'll need a nightly build of Rust). We don't want to trace the println! call, so we'll separate the RPN expression into a variable:


#![feature(trace_macros)]

macro_rules! rpn { /* ... */ }

fn main() {
    trace_macros!(true);
    let e = rpn!(2 3 7 + 4 *);
    trace_macros!(false);
    println!("{}", e);
}

In the output we can now see, step by step, how the macro is recursively expanded:

note: trace_macro
 --> src/
   |
39 | let e = rpn!(2 3 7 + 4 *);
   |         ^^^^^^^^^^^^^^^^^
   |
   = note: expanding `rpn! { 2 3 7 + 4 * }`
   = note: to `rpn ! ( [ ] 2 3 7 + 4 * )`
   = note: expanding `rpn! { [ ] 2 3 7 + 4 * }`
   = note: to `rpn ! ( [ 2 ] 3 7 + 4 * )`
   = note: expanding `rpn! { [ 2 ] 3 7 + 4 * }`
   = note: to `rpn ! ( [ 3 , 2 ] 7 + 4 * )`
   = note: expanding `rpn! { [ 3 , 2 ] 7 + 4 * }`
   = note: to `rpn ! ( [ 7 , 3 , 2 ] + 4 * )`
   = note: expanding `rpn! { [ 7 , 3 , 2 ] + 4 * }`
   = note: to `rpn ! ( @ op [ 7 , 3 , 2 ] + 4 * )`
   = note: expanding `rpn! { @ op [ 7 , 3 , 2 ] + 4 * }`
   = note: to `rpn ! ( [ 3 + 7 , 2 ] 4 * )`
   = note: expanding `rpn! { [ 3 + 7 , 2 ] 4 * }`
   = note: to `rpn ! ( [ 4 , 3 + 7 , 2 ] * )`
   = note: expanding `rpn! { [ 4 , 3 + 7 , 2 ] * }`
   = note: to `rpn ! ( @ op [ 4 , 3 + 7 , 2 ] * )`
   = note: expanding `rpn! { @ op [ 4 , 3 + 7 , 2 ] * }`
   = note: to `rpn ! ( [ 3 + 7 * 4 , 2 ] )`
   = note: expanding `rpn! { [ 3 + 7 * 4 , 2 ] }`
   = note: to `rpn ! ( [ ] [ 3 + 7 * 4 , 2 ] )`
   = note: expanding `rpn! { [ ] [ 3 + 7 * 4 , 2 ] }`
   = note: to `rpn ! ( [ [ 3 + 7 * 4 , 2 ] ] )`
   = note: expanding `rpn! { [ [ 3 + 7 * 4 , 2 ] ] }`
   = note: to `[(3 + 7) * 4, 2]`

If we look carefully through the trace, we'll notice that the problem originates in these steps:

= note: expanding `rpn! { [ 3 + 7 * 4 , 2 ] }`
= note: to `rpn ! ( [ ] [ 3 + 7 * 4 , 2 ] )`

Since [ 3 + 7 * 4 , 2 ] was not matched by the ([$result:expr]) => ... branch as a final expression, it was instead caught by the final catch-all ($($tokens:tt)*) => ... branch, an empty stack [] was prepended, and then the original [ 3 + 7 * 4 , 2 ] was matched by the generic $num:tt and pushed onto the stack as a single final value.

To prevent this from happening, let's insert another branch between these last two that matches any stack.

It will be hit only when we've run out of tokens but the stack doesn't contain exactly one final value, so we can treat this as a compile error and produce a proper error message with the built-in compile_error! macro.

Note that we can't use format! in this context, since it uses runtime APIs to build the string; instead, we use the built-in concat! and stringify! macros to construct the message:


macro_rules! rpn {
    // ...
    ([ $result:expr ]) => {
        $result
    };
    ([ $($stack:expr),* ]) => {
        compile_error!(concat!(
            "Could not find final value for the expression, perhaps you missed an operator? Final stack: ",
            stringify!([ $($stack),* ])
        ))
    };
    ($($tokens:tt)*) => {
        rpn!([] $($tokens)*)
    };
}

The error message is now more meaningful and contains at least some details about the current state of evaluation:

error: Could not find final value for the expression, perhaps you missed an operator? Final stack: [ (3 + 7) * 4 , 2 ]
 --> src/
   |
31 | compile_error!(concat!("Could not find final value for the expression, perhaps you missed an operator? Final stack: ", stringify!([$($stack),*])))
   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...
40 | println!("{}", rpn!(2 3 7 + 4 *));
   | ----------------- in this macro invocation

But what if we leave out a number instead?


println!("{}", rpn!(2 3 + *));

Unfortunately, this one is still not very helpful:

error: expected expression, found `@`
 --> src/
   |
15 | rpn!(@op $stack * $($rest)*)
   |      ^
...
40 | println!("{}", rpn!(2 3 + *));
   | ------------- in this macro invocation

Even if you try trace_macros, for some reason it won't show the stack here; still, it's relatively clear what's going on - the @op branch has very specific matching conditions (it requires at least two values on the stack), and when they aren't met, @ gets matched by the way-too-greedy $num:tt and pushed onto the stack.

To avoid this, we again add another branch that matches anything starting with @op that wasn't matched already, and produce a compile error:


macro_rules! rpn {
    (@op [ $b:expr, $a:expr $(, $stack:expr)* ] $op:tt $($rest:tt)*) => {
        rpn!([ $a $op $b $(, $stack)* ] $($rest)*)
    };
    (@op $stack:tt $op:tt $($rest:tt)*) => {
        compile_error!(concat!(
            "Could not apply operator `",
            stringify!($op),
            "` to the current stack: ",
            stringify!($stack)
        ))
    };
    // ...
}

Let's try again:

error: Could not apply operator `*` to the current stack: [ 2 + 3 ]
 --> src/
   |
 9 | compile_error!(concat!("Could not apply operator ", stringify!($op), " to current stack: ", stringify!($stack)))
   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...
46 | println!("{}", rpn!(2 3 + *));
   | ------------- in this macro invocation

Much better! Now our macro can evaluate any RPN expression at compile time and gracefully handles the most common mistakes, so let's call it a day and declare it production-ready. :)

There are many more small improvements we could add, but they are beyond the scope of this demonstration tutorial.

Feel free to let me know on Twitter whether this has been useful, and what topics you'd like to see covered!

  1. (Translator's note) This refers to The Rust Programming Language. ↩︎

Categories: Technology

Writing complex macros in Rust: Reverse Polish Notation

Wed, 31/01/2018 - 12:11

(This is a crosspost of a tutorial originally published on my personal blog)

Among other interesting features, Rust has a powerful macro system. Unfortunately, even after reading The Book and various tutorials, when it came to trying to implement a macro that involved processing complex lists of different elements, I still struggled to understand how it should be done, and it took some time until I got to that "ding" moment and started misusing macros for everything :) (ok, not everything as in the i-am-using-macros-because-i-dont-want-to-use-functions-and-specify-types-and-lifetimes everything like I've seen some people do, but anywhere it's actually useful)

Rust with a macro lens
CC BY 2.0 image by Conor Lawless

So, here is my take on describing the principles behind writing such macros. It assumes you have read the Macros section from The Book and are familiar with basic macros definitions and token types.

I'll use Reverse Polish Notation as the example for this tutorial. It's interesting because it's simple, you might already be familiar with it from school, and yet implementing it statically at compile time already requires a recursive macro approach.

Reverse Polish Notation (also called postfix notation) uses a stack for all its operations, so that any operand is pushed onto the stack, and any [binary] operator takes two operands from the stack, evaluates the result and puts it back. So an expression like the following:

2 3 + 4 *

translates into:

  1. Put 2 onto the stack.
  2. Put 3 onto the stack.
  3. Take two last values from the stack (3 and 2), apply operator + and put the result (5) back onto the stack.
  4. Put 4 onto the stack.
  5. Take two last values from the stack (4 and 5), apply operator * (4 * 5) and put the result (20) back onto the stack.
  6. End of expression, the single value on the stack is the result (20).

In a more common infix notation, used in math and most modern programming languages, the expression would look like (2 + 3) * 4.
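Before building the compile-time version, the stack discipline in the steps above can be sketched as a plain runtime evaluator. The following is a minimal, hypothetical Rust function (not part of the macro we're about to write; the name eval_rpn and the &str token representation are illustrative assumptions):

```rust
// Hypothetical runtime illustration of the RPN stack algorithm described above.
fn eval_rpn(tokens: &[&str]) -> Option<i64> {
    let mut stack: Vec<i64> = Vec::new();
    for tok in tokens {
        match *tok {
            // An operator pops the two most recent operands and pushes the result.
            "+" | "-" | "*" | "/" => {
                let b = stack.pop()?;
                let a = stack.pop()?;
                stack.push(match *tok {
                    "+" => a + b,
                    "-" => a - b,
                    "*" => a * b,
                    _ => a / b,
                });
            }
            // Anything else is parsed as a number and pushed onto the stack.
            num => stack.push(num.parse().ok()?),
        }
    }
    // A valid expression leaves exactly one value on the stack.
    if stack.len() == 1 { stack.pop() } else { None }
}

fn main() {
    // 2 3 + 4 *  ==  (2 + 3) * 4  ==  20
    assert_eq!(eval_rpn(&["2", "3", "+", "4", "*"]), Some(20));
}
```

The macro we write below performs exactly these pops and pushes, only at expansion time rather than at runtime.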

So let's write a macro that would evaluate RPN at compile-time by converting it into an infix notation that Rust understands.

macro_rules! rpn {
    // TODO
}

println!("{}", rpn!(2 3 + 4 *)); // 20

Let's start with pushing numbers onto the stack.

Macros currently don't allow matching literals, and expr won't work for us because it can accidentally match a sequence like 2 + 3 ... instead of taking just a single number, so we'll resort to tt - a generic token matcher that matches exactly one token tree (whether it's a primitive token like a literal/identifier/lifetime/etc. or a ()/[]/{}-parenthesized expression containing more tokens):

macro_rules! rpn {
    ($num:tt) => {
        // TODO
    };
}

Now, we'll need a variable for the stack.

Macros can't use real variables, because we want this stack to exist only at compile time. So, instead, the trick is to have a separate token sequence that can be passed around, and so used as kind of an accumulator.

In our case, let's represent it as a comma-separated sequence of expr (since we will be using it not only for simple numbers but also for intermediate infix expressions) and wrap it into brackets to separate from the rest of the input:

macro_rules! rpn {
    ([ $($stack:expr),* ] $num:tt) => {
        // TODO
    };
}

Now, a token sequence is not really a variable - you can't modify it in place and do something afterwards. Instead, you can create a new copy of this token sequence with the necessary modifications, and recursively call the same macro again.

If you are coming from functional language background or worked with any library providing immutable data before, both of these approaches - mutating data by creating a modified copy and processing lists with a recursion - are likely already familiar to you:

macro_rules! rpn {
    ([ $($stack:expr),* ] $num:tt) => {
        rpn!([ $num $(, $stack)* ])
    };
}

Now, obviously, the case with just a single number is rather unlikely and not very interesting to us, so we'll need to match anything else after that number as a sequence of zero or more tt tokens, which can be passed to the next invocation of our macro for further matching and processing:

macro_rules! rpn {
    ([ $($stack:expr),* ] $num:tt $($rest:tt)*) => {
        rpn!([ $num $(, $stack)* ] $($rest)*)
    };
}

At this point we're still missing operator support. How do we match operators?

If our RPN were a sequence of tokens we wanted to process all in exactly the same way, we could simply use a list like $($token:tt)*. Unfortunately, that wouldn't give us the ability to walk through the list and either push an operand or apply an operator depending on each token.

The Book says that the "macro system does not deal with parse ambiguity at all", and that's true for a single macro branch - we can't match a sequence of numbers followed by an operator, like $($num:tt)* +, because + is also a valid token and would be swallowed by the tt group - but this is where recursive macros help again.

If you have different branches in your macro definition, Rust will try them one by one, so we can put our operator branches before the numeric one and, this way, avoid any conflict:

macro_rules! rpn {
    ([ $($stack:expr),* ] + $($rest:tt)*) => {
        // TODO
    };
    ([ $($stack:expr),* ] - $($rest:tt)*) => {
        // TODO
    };
    ([ $($stack:expr),* ] * $($rest:tt)*) => {
        // TODO
    };
    ([ $($stack:expr),* ] / $($rest:tt)*) => {
        // TODO
    };
    ([ $($stack:expr),* ] $num:tt $($rest:tt)*) => {
        rpn!([ $num $(, $stack)* ] $($rest)*)
    };
}

As I said earlier, operators are applied to the last two numbers on the stack, so we'll need to match them separately, "evaluate" the result (construct a regular infix expression) and put it back:

macro_rules! rpn {
    ([ $b:expr, $a:expr $(, $stack:expr)* ] + $($rest:tt)*) => {
        rpn!([ $a + $b $(, $stack)* ] $($rest)*)
    };
    ([ $b:expr, $a:expr $(, $stack:expr)* ] - $($rest:tt)*) => {
        rpn!([ $a - $b $(, $stack)* ] $($rest)*)
    };
    ([ $b:expr, $a:expr $(, $stack:expr)* ] * $($rest:tt)*) => {
        rpn!([ $a * $b $(, $stack)* ] $($rest)*)
    };
    ([ $b:expr, $a:expr $(, $stack:expr)* ] / $($rest:tt)*) => {
        rpn!([ $a / $b $(, $stack)* ] $($rest)*)
    };
    ([ $($stack:expr),* ] $num:tt $($rest:tt)*) => {
        rpn!([ $num $(, $stack)* ] $($rest)*)
    };
}

I'm not really a fan of such obvious repetition, but, just like with literals, there is no special token type to match operators.

What we can do, however, is add a helper that would be responsible for the evaluation, and delegate any explicit operator branch to it.

In macros, you can't really use an external helper, but the one thing you can be sure of is that your macro is already in scope, so the usual trick is to have a branch in the same macro "marked" with a unique token sequence, and call it recursively just as we did in the regular branches.

Let's use @op as such a marker, and accept any operator via tt inside it (tt is unambiguous in this context because we'll only be passing operators to this helper).

And the stack no longer needs to be expanded in each separate branch - since we wrapped it in [] brackets earlier, it can be matched as just another token tree (tt) and then passed on to our helper:

macro_rules! rpn {
    (@op [ $b:expr, $a:expr $(, $stack:expr)* ] $op:tt $($rest:tt)*) => {
        rpn!([ $a $op $b $(, $stack)* ] $($rest)*)
    };
    ($stack:tt + $($rest:tt)*) => {
        rpn!(@op $stack + $($rest)*)
    };
    ($stack:tt - $($rest:tt)*) => {
        rpn!(@op $stack - $($rest)*)
    };
    ($stack:tt * $($rest:tt)*) => {
        rpn!(@op $stack * $($rest)*)
    };
    ($stack:tt / $($rest:tt)*) => {
        rpn!(@op $stack / $($rest)*)
    };
    ([ $($stack:expr),* ] $num:tt $($rest:tt)*) => {
        rpn!([ $num $(, $stack)* ] $($rest)*)
    };
}
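The @-marker idiom is not specific to rpn!. As a separate, hypothetical illustration (the macro name squares and its @square marker are made up for this example), here is a macro whose marked branch acts as a private helper that callers are never expected to invoke directly:

```rust
// Hypothetical illustration of an internal helper branch selected by a marker token.
macro_rules! squares {
    // "Private" helper branch, reachable only via the @square marker.
    (@square $x:expr) => { $x * $x };
    // Public entry point: delegates each element to the helper.
    ($($x:expr),*) => { [ $( squares!(@square $x) ),* ] };
}

fn main() {
    assert_eq!(squares!(2, 3), [4, 9]);
}
```

Because @ cannot start an ordinary Rust expression, a leading @op (or @square) token sequence will never collide with normal input, which is what makes it a safe marker.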

Now all other tokens are processed by their corresponding branches, and we just need to handle the final case, where the stack contains a single item and no more tokens are left:

macro_rules! rpn {
    // ...
    ([ $result:expr ]) => {
        $result
    };
}

At this point, if you invoke this macro with an empty stack and an RPN expression, it will already produce a correct result:


println!("{}", rpn!([] 2 3 + 4 *)); // 20

However, our stack is an implementation detail and we really wouldn't want every consumer to pass an empty stack in, so let's add another catch-all branch at the end that serves as an entry point and adds [] automatically:


macro_rules! rpn {
    // ...
    ($($tokens:tt)*) => {
        rpn!([] $($tokens)*)
    };
}

println!("{}", rpn!(2 3 + 4 *)); // 20

Our macro even works for more complex expressions, like the one from the Wikipedia page about RPN!

println!("{}", rpn!(15 7 1 1 + - / 3 * 2 1 1 + + -)); // 5

Error handling

Now everything seems to work smoothly for correct RPN expressions, but for a macro to be production-ready we need to be sure that it can handle invalid input as well, with a reasonable error message.

First, let's try to insert another number in the middle and see what happens:

println!("{}", rpn!(2 3 7 + 4 *));


error[E0277]: the trait bound `[{integer}; 2]: std::fmt::Display` is not satisfied
 --> src/
   |
36 | println!("{}", rpn!(2 3 7 + 4 *));
   |                ^^^^^^^^^^^^^^^^^ `[{integer}; 2]` cannot be formatted with the default formatter; try using `:?` instead if you are using a format string
   |
   = help: the trait `std::fmt::Display` is not implemented for `[{integer}; 2]`
   = note: required by `std::fmt::Display::fmt`

Okay, that definitely doesn't look helpful as it doesn't provide any information relevant to the actual mistake in the expression.

In order to figure out what happened, we will need to debug our macro. For that, we'll use the trace_macros feature (and, as with any other optional compiler feature, you'll need a nightly version of Rust). We don't want to trace the println! call, so we'll move our RPN calculation into a variable:


#![feature(trace_macros)]

macro_rules! rpn { /* ... */ }

fn main() {
    trace_macros!(true);
    let e = rpn!(2 3 7 + 4 *);
    trace_macros!(false);
    println!("{}", e);
}

In the output we'll now see how our macro is being recursively evaluated step by step:

note: trace_macro
 --> src/
   |
39 | let e = rpn!(2 3 7 + 4 *);
   |         ^^^^^^^^^^^^^^^^^
   |
   = note: expanding `rpn! { 2 3 7 + 4 * }`
   = note: to `rpn ! ( [ ] 2 3 7 + 4 * )`
   = note: expanding `rpn! { [ ] 2 3 7 + 4 * }`
   = note: to `rpn ! ( [ 2 ] 3 7 + 4 * )`
   = note: expanding `rpn! { [ 2 ] 3 7 + 4 * }`
   = note: to `rpn ! ( [ 3 , 2 ] 7 + 4 * )`
   = note: expanding `rpn! { [ 3 , 2 ] 7 + 4 * }`
   = note: to `rpn ! ( [ 7 , 3 , 2 ] + 4 * )`
   = note: expanding `rpn! { [ 7 , 3 , 2 ] + 4 * }`
   = note: to `rpn ! ( @ op [ 7 , 3 , 2 ] + 4 * )`
   = note: expanding `rpn! { @ op [ 7 , 3 , 2 ] + 4 * }`
   = note: to `rpn ! ( [ 3 + 7 , 2 ] 4 * )`
   = note: expanding `rpn! { [ 3 + 7 , 2 ] 4 * }`
   = note: to `rpn ! ( [ 4 , 3 + 7 , 2 ] * )`
   = note: expanding `rpn! { [ 4 , 3 + 7 , 2 ] * }`
   = note: to `rpn ! ( @ op [ 4 , 3 + 7 , 2 ] * )`
   = note: expanding `rpn! { @ op [ 4 , 3 + 7 , 2 ] * }`
   = note: to `rpn ! ( [ 3 + 7 * 4 , 2 ] )`
   = note: expanding `rpn! { [ 3 + 7 * 4 , 2 ] }`
   = note: to `rpn ! ( [ ] [ 3 + 7 * 4 , 2 ] )`
   = note: expanding `rpn! { [ ] [ 3 + 7 * 4 , 2 ] }`
   = note: to `rpn ! ( [ [ 3 + 7 * 4 , 2 ] ] )`
   = note: expanding `rpn! { [ [ 3 + 7 * 4 , 2 ] ] }`
   = note: to `[(3 + 7) * 4, 2]`

If we carefully look through the trace, we'll notice that the problem originates in these steps:

= note: expanding `rpn! { [ 3 + 7 * 4 , 2 ] }`
= note: to `rpn ! ( [ ] [ 3 + 7 * 4 , 2 ] )`

Since [ 3 + 7 * 4 , 2 ] was not matched by ([$result:expr]) => ... branch as a final expression, it was caught by our final catch-all ($($tokens:tt)*) => ... branch instead, prepended with an empty stack [] and then the original [ 3 + 7 * 4 , 2 ] was matched by generic $num:tt and pushed onto the stack as a single final value.

In order to prevent this from happening, let's insert another branch between these last two that would match any stack.

It will be hit only when we have run out of tokens but the stack doesn't contain exactly one final value, so we can treat this as a compile error and produce a more helpful message using the built-in compile_error! macro.

Note that we can't use format! in this context since it uses runtime APIs to format a string, and instead we'll have to limit ourselves to built-in concat! and stringify! macros to format a message:


macro_rules! rpn {
    // ...
    ([ $result:expr ]) => {
        $result
    };
    ([ $($stack:expr),* ]) => {
        compile_error!(concat!(
            "Could not find final value for the expression, perhaps you missed an operator? Final stack: ",
            stringify!([ $($stack),* ])
        ))
    };
    ($($tokens:tt)*) => {
        rpn!([] $($tokens)*)
    };
}
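As an aside on the concat!/stringify! combination used in the error branch: both expand at compile time, so together they produce a single &'static str with no runtime formatting. A minimal, hypothetical sketch (the constant name MSG and the example tokens are made up here):

```rust
// Hypothetical sketch: building a static message entirely at compile time,
// the same way the compile_error! branch builds its message.
const MSG: &str = concat!(
    "Could not apply operator `",
    stringify!(*),
    "` to the current stack: ",
    stringify!(2 + 3)
);

fn main() {
    assert_eq!(MSG, "Could not apply operator `*` to the current stack: 2 + 3");
}
```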

The error message is now more meaningful and contains at least some details about current state of evaluation:

error: Could not find final value for the expression, perhaps you missed an operator? Final stack: [ (3 + 7) * 4 , 2 ]
 --> src/
   |
31 | compile_error!(concat!("Could not find final value for the expression, perhaps you missed an operator? Final stack: ", stringify!([$($stack),*])))
   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...
40 | println!("{}", rpn!(2 3 7 + 4 *));
   | ----------------- in this macro invocation

But what if, instead, we leave out a number?


println!("{}", rpn!(2 3 + *));

Unfortunately, this one is still not too helpful:

error: expected expression, found `@`
 --> src/
   |
15 | rpn!(@op $stack * $($rest)*)
   |      ^
...
40 | println!("{}", rpn!(2 3 + *));
   | ------------- in this macro invocation

Even trace_macros won't expand the stack here for some reason, but, luckily, it's relatively clear what's going on - the @op branch has very specific conditions on what it matches (it expects at least two values on the stack), and, when those aren't met, @ gets matched by the same way-too-greedy $num:tt and pushed onto the stack.

To avoid this, again, we'll add another branch to match anything starting with @op that wasn't matched already, and produce a compile error:


macro_rules! rpn {
    (@op [ $b:expr, $a:expr $(, $stack:expr)* ] $op:tt $($rest:tt)*) => {
        rpn!([ $a $op $b $(, $stack)* ] $($rest)*)
    };
    (@op $stack:tt $op:tt $($rest:tt)*) => {
        compile_error!(concat!(
            "Could not apply operator `",
            stringify!($op),
            "` to the current stack: ",
            stringify!($stack)
        ))
    };
    // ...
}

Let's try again:

error: Could not apply operator `*` to the current stack: [ 2 + 3 ]
 --> src/
   |
 9 | compile_error!(concat!("Could not apply operator ", stringify!($op), " to current stack: ", stringify!($stack)))
   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...
46 | println!("{}", rpn!(2 3 + *));
   | ------------- in this macro invocation

Much better! Now our macro can evaluate any RPN expression at compile time, and gracefully handles the most common mistakes, so let's call it a day and say it's production-ready :)

There are many more small improvements we could add, but I'd like to leave them outside this demonstration tutorial.

Feel free to let me know on Twitter whether this has been useful, and/or what topics you'd like to see covered better!

Categories: Technology

SEO Performance in 2018 Using Cloudflare

Sun, 28/01/2018 - 15:00
SEO Performance in 2018 Using Cloudflare

For some businesses SEO is a bad word, and for good reason. Google and other search engines keep their algorithms a well-guarded secret making SEO implementation not unlike playing a game where the referee won’t tell you all the rules. While SEO experts exist, the ambiguity around search creates an opening for grandiose claims and misinformation by unscrupulous profiteers claiming expertise.

If you’ve done SEO research, you may have come across an admixture of legitimate SEO practices, outdated optimizations, and misguided advice. You might have read that using the keyword meta tag in your HTML will help your SEO (it won’t), that there’s a specific number of instances a keyword should occur on a webpage (there isn’t), or that buying links will improve your rankings (it likely won’t and will get the site penalized). Let’s sift through the noise and highlight some dos and don’ts for performance-based SEO in 2018.

SEO is dead, long live SEO!

Nearly every year since its inception, SEO has been declared dead. It is true that the scope of best practices for search engines has narrowed over the years as search engines have become smarter, and much of the benefit from SEO can be gained by following these two rules:

  1. Create good content
  2. Don’t be creepy

Beyond the fairly obvious, there are a number of tactics that can help improve the importance with which a website is evaluated inside Google, Bing and others. This blog will focus on optimizing for Google, though the principles and practices likely apply to all search engines.

Does using Cloudflare hurt my SEO?

The short answer is, no. When asked whether or not Cloudflare can damage search rankings, John Mueller from Google stated CDNs can work great for both users and search engines when properly configured. This is consistent with our findings at Cloudflare, as we have millions of web properties, including SEO agencies, who use our service to improve both performance and SEO.

Can load time affect a site's SEO ranking?

Yes, it can. Since at least 2010, Google has publicly stated that site speed affects your Google ranking. While most sites at that time were not affected, times have changed and heavier sites with frontend frameworks, images, CMS platforms and/or a slew of other javascript dependencies are the new normal. Google promotes websites that result in a good user experience, and slow sites are frustrating and penalized in rankings as a result.

The cost of slow websites on user experience is particularly dramatic in mobile, where limited bandwidth results in further constraints. Aside from low search rankings, slow loading sites result in bad outcomes; research by Google indicates 53% of mobile sites are abandoned if load time is more than 3 seconds. Separate research from Google using a deep neural network found that as a mobile site’s load time goes from 1 to 7 seconds, the probability of a visitor bouncing increases 113%. The problems surrounding page speed increase the longer a site takes to load; mobile sites that load in 5 seconds earn 2x more ad revenue than those that take 19 seconds to load (the average time to completely load a site on a 3G connection).

What tools can I use to evaluate my site's performance?

A number of free and verified tools are available for checking a website’s performance. Based on Google’s research, you can estimate the number of visitors you will lose due to excessive loading time on mobile. Not to sound clickbait-y, but the results may surprise you.

As more web traffic continues to shift to mobile, mobile optimization must be prioritized for most websites. Google has announced that in July 2018 mobile speed will also affect SEO placement. If you want to do more research on your site’s overall mobile readiness, you can check to see if your site is mobile friendly.

If you’re technically-minded and use Chrome, you can pop into the Chrome devtools and click on the audits tab to access Lighthouse, Chrome’s built in analysis tool.
SEO Performance in 2018 Using Cloudflare

Other key metrics used for judging your site's performance include FCP and DCL speeds. First Contentful Paint (FCP) measures the first moment content is loaded onto the screen of the user, answering the user’s question: “is this useful?”. The other metric, DOM Content Loaded (DCL), measures when all stylesheets have loaded and the DOM tree is able to be rendered. Google provides a tool for you to measure your website’s FCP and DCL speeds relative to other sites.

Can spammy websites hosted on the same platform hurt SEO?

Generally speaking, there is no cause for concern as shared hosts shouldn’t hurt your SEO, even if some of the sites on the shared host are less reputable. In the unlikely event you find yourself as the only legitimate website on the host that is almost entirely spam, it might be time to rethink your hosting strategy.

Does downtime hurt SEO?

If your site is down when it’s crawled, it may be temporarily pulled from results. This is why service interruptions, such as getting DDoSed during peak purchase times, can be especially damaging. Typically a site’s ranking will recover when it comes back online. If it’s down for an entire day, it may take up to a few weeks to recover.

Don’t be creepy in SEO: an incomplete guide

Everybody likes to win, but playing outside the rules can have consequences. For websites that attempt to circumvent Google’s guidelines in an attempt to trick the search algorithms and web crawlers, a perilous future awaits. Here are a few things that you should make sure you avoid.

Permitting user-generated spam - sometimes unmoderated comment sections run amok with user generated spam ads, complete with links to online pharmacies and other unrelated topics. Leaving these types of links in place lowers the quality of your content and may subject you to penalization. Having trouble handling a spam situation? There are strategies you can implement.

Link schemes - while sharing links with reputable sources is still a legitimate tactic, excessively sharing links is not. Likewise, purchasing large bundles of links in an attempt to boost SEO by artificially passing PageRank is best avoided. There are many link schemes, and if you’re curious whether or not you’re in violation, look at Google’s documentation. If you feel like you might’ve made questionable link decisions in the past and you want to undo them, you can disavow links that point to your site, but use this feature with extreme caution.

Doorway pages - by creating many pages that optimize for specific search phrases but ultimately point to the same page, some sites attempt to saturate all the search terms around a particular topic. While this might be a tempting strategy to gain a lot of SEO very quickly, it may result in all pages losing rank.

Scraping content - In an attempt to artificially build content, some websites will scrape content from other reputable sources and call it their own. Aside from the fact that this behavior can get a site flagged by the Panda algorithm for unrelated or excessive content, it is also in violation of the guidelines and can result in penalization or removal of a website from results.

Hidden text and links - by hiding text inside a webpage so it’s not visible to users, some websites will try to artificially increment the amount of content they have on their site or the amount of instances a keyword occurs. Hiding text behind an image, setting a font size to zero, using CSS to position an element off of the screen, or the classic “white text on a white background” are all tactics to be avoided.

Sneaky redirects - as the name implies, it’s possible to surreptitiously redirect users from the result that they were expecting onto something different. Split cases can also occur where a desktop version of the site will be directed to the intended page while the mobile will be forwarded to full-screen advertising.

Cloaking - by attempting to show different content to search engines and users, some sites will attempt to circumvent the processes a search engine has in place to filter out low value content. While cloaking might have a cool name, it’s in violation and can result in rank reduction or listing removal.

What SEO resources does Google provide?

There are a number of sources that can be considered authoritative when it comes to Google SEO. John Mueller, Gary Illyes and (formerly) Matt Cutts collectively represent a large portion of the official voice of Google search and provide much of the official SEO best practices content. Aside from the videos, blogs, office hours, and other content provided by these experts, Google also provides the Google webmaster blog and Google search console, which house various resources and updates.

Last but not least, if you have web properties currently on Cloudflare there are technical optimizations you can make to improve your SEO.

Categories: Technology

