Blogroll: CloudFlare

I read blogs, as well as write one. The 'blogroll' on this site reproduces some posts from some of the people I enjoy reading. There are currently 18 posts from the blog 'CloudFlare.'

Disclaimer: Reproducing an article here does not necessarily imply agreement or endorsement!

The Cloudflare Blog

The Serverlist: Globally Distributed Websites, $16M Series A, and more

Tue, 11/02/2020 - 19:23

Check out our twelfth edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend.

Sign up below to have The Serverlist sent directly to your mailbox.

Categories: Technology

Vetflare, Cloudflare's Military Veteran Employee Group Launches

Mon, 10/02/2020 - 18:18

“Diversity leads to better outcomes… better decisions, increased innovation, stronger financial returns, and a great place to work for everyone,” said Janet Van Huysse, Head of People at Cloudflare, during our Q1-2020 kickoff. Veterans, people who have served in the military, are a vital element of a diverse workforce. We come in diverse shapes, sizes, colors, genders, and orientations. We bring diverse skillsets, experiences, and perspectives.

If you haven’t served in the military and haven’t worked with many veterans, here are some of the things you can expect from colleagues or direct reports who are veterans.

Veterans know what it means to SERVE. Indeed, it is a truism that living in service to others is a life well-lived, and that service to others is a foundation of esprit de corps. Though relatively few of us have seen combat, we have all signed a blank check to our nation made payable for any amount, up to and including our lives. This is what it means to become part of something bigger than oneself. This translates to putting our common shared interests ahead of our personal interests even when that means becoming an instrument of a foreign policy we might not agree with.  

Veterans know what it means to be part of a TEAM. The phrase “I’ve got your back” means a lot when it comes from a veteran because they’re referring to the blank check. Just about every veteran you ask will tell you they really miss being part of something bigger than themselves. Companies and organizations in the civilian world that can connect the dots in this way, like Cloudflare’s mission to help build a better internet, unlock the magic that accomplishes the seemingly impossible. We see this at Cloudflare in the incredible pace of product releases AND product improvements. We see this at Cloudflare when people go to the mat for their customers and when people come together to fix a problem.  

Veterans know what it means to focus on a MISSION. When people have bought into the mission, everything and everyone aligns to achieve it. We know that together, as part of a team, with solid leadership, strategy, and tactics we can accomplish the mission. Veterans will help you drop things that are extraneous to the mission and help you focus on the things that will get the mission accomplished. When a veteran on your team asks, “What problem are we trying to solve?” or “Why are we doing this?” you can bet a paycheck that they’re trying to draw a straight line to the goal of the mission.  

Veterans know what it means to COMMIT. Most people view the military as a top-down, hierarchical organization because, well... it is.  But most people don’t realize the level of consensus-driven decision-making that happens prior to an order being given. “Because I told you so” is just not enough of a reason for people to risk their lives or for them to effectively execute their part of a mission. So the military involves their people in mission planning where alternatives are thrashed out, often with great conviction. But when time is up and the mission commander makes their call on how the mission will be carried out, veterans know it’s time to put aside their personal opinions, get onboard, and do whatever it takes to make the plan successful. Jeff Bezos famously calls this “disagree and commit” and veterans are well-practiced in this skill.

Veterans know the importance of MORALE. We’ve seen the unit with everything going for it fail, and we’ve seen the underdog come out on top. We’ve seen troubled units turn themselves around, seemingly overnight. Veterans know how the days drag on endlessly when morale is low, and we know the joy that comes from playing our part in a group that is proud to be doing what it’s doing.

Veterans know how to make DIVERSITY work. We had to because we had no choice in who we worked with in the military. Every year one-third of the people in our units left and new people showed up out of the blue. They were selected by someone else and we couldn’t fire them. So veterans get good at onboarding themselves into new organizations and onboarding new people to their teams. Veterans get good at figuring out what people have to offer and where they have gaps so the team can reshape itself to maximize performance.  

If you’re a veteran reading this, know that Cloudflare has a seat at the table for you. This can be your opportunity to transition into the civilian world, transition into tech, or accelerate your career in tech at a rocket-ship that appreciates what you have to offer.  

Supporting veterans is distinct from supporting their country’s foreign policy. Most Americans recognize the mistake we made in not welcoming home veterans of the Vietnam War because we didn’t support the war at large. Nowadays, “thank you for your service” is a meaningful phrase most veterans hear with some regularity and I’m here to tell you that it means a lot. And it especially means a lot to those veterans who carry the lifelong burden of combat action.

So we Cloudflarians who are also veterans want to say thank you to all of YOU for welcoming us into this company, this culture, and this team that is doing so much more than helping to build a better Internet. We are proud and grateful to serve alongside you at CLOUDFLARE.

Categories: Technology

JAMstack at the Edge: How we built Built with Workers… on Workers

Wed, 29/01/2020 - 13:00

I'm extremely stoked to announce Built with Workers today – it's an awesome resource for exploring what you can build with Cloudflare Workers. As Adam explained in our launch post, showcasing developers building incredible projects with tools like Workers KV or our streaming HTML rewriter is a great way to celebrate users of our platform. It also helps encourage developers to try building their dream app on top of Workers. In this post, I’ll explore some of the architectural and implementation designs we made while building the site.

When we first started planning Built with Workers, we wanted to use the site as an opportunity to build a new greenfield application, showcasing the strength of the Workers platform. The Workers Developer Experience team is cross-functional: while we might spend most of our time improving our docs, or developing features for our command-line interface Wrangler, most of us have spent years developing on the web. The prospect of starting a new application is always fun, but in this instance, it was a prime chance to ask (and answer) the question, "If I could build this site on Workers with whatever tools I want, what would I choose?"

A guiding principle for the Workers platform is ease-of-use. The programming model is simple: it's just JavaScript (or, via WASM, Rust, C, and C++), and you have complete control over the requests coming in and the requests going out from your Workers script. In the same way, while building Built with Workers, it was crucial to find a set of tools that could enable something like this throughout the process of building the entire application. To enable this, we've embraced JAMstack – a software stack comprised of JavaScript, APIs, and markup – with Built with Workers, deploying always up-to-date static builds of the site directly to the edge, using Workers Sites. Our framework of choice, Gatsby.js, provides a set of sane defaults to build a modern web application. To manage content and the layout of the site, we've chosen Sanity.io, a powerful headless CMS that allows us to model the entire website without needing to deploy any databases or spin up any additional infrastructure.

Personally, I'm excited about JAMstack as a methodology for building web applications because of this emphasis on reducing infrastructure: it's incredibly similar to the motivations behind deploying serverless applications using Cloudflare Workers, and as we developed Built with Workers, we discovered a number of these philosophical similarities in JAMstack and Cloudflare Workers – exciting! To help encourage developers to explore building their own JAMstack applications on Workers, I'm also announcing today that we've made the Built with Workers codebase open-source on GitHub – you can check out how the application is developed, built and deployed from start to finish.

In this post, we'll dig into Built with Workers, exploring how it works, the technical decisions we've made, and some of the most fascinating aspects of what it means to build applications on the edge.

[Image: A screenshot of the Built with Workers homepage]

Exploring the JAMstack

My first encounter with tooling that would ultimately become part of "JAMstack" was in 2013. I noticed the huge proliferation of developers building personal "static" sites – taking blog posts written primarily in Markdown, and pushing them through frameworks like Jekyll to build full websites that could easily be deployed to a number of CDNs and file hosting platforms. These static sites were fast – they are just HTML, CSS, and JavaScript – and easy to update. The average developer spends their days maintaining large and complex software systems, so it was relaxing to just write Markdown, plug in some re-usable HTML and CSS, and deploy your website. The advent of static sites, of course, isn't new – but after years of increasingly complex full-stack technology, the return to simplicity was a promising development for many kinds of websites that didn't need databases, or any dynamic content.

In the last couple years, JAMstack has built upon that resurgence, and represents an approach to building complete, complex applications using the same tooling that has become the first choice for developers building their simple personal sites. JAMstack is comprised of three conceptual pieces – JavaScript, APIs, and Markup – each of which is a crucial aspect of simplifying our web applications and making them easy to write, build, and deploy.

J is for JavaScript

The JAMstack architecture relies heavily on the ubiquity of JavaScript as the language of the web. Many modern web applications use powerful, dynamic front-end frameworks like React and Vue to render user interfaces and process state on the client for users. On the backend, or in Workers' case, on the edge, any dynamic functionality in your JAMstack application should be built on top of JavaScript, often working in the request-response model that full-stack developers are accustomed to.

The Workers platform is perfectly suited to this requirement! As a developer building on Workers, you have total control of incoming requests and outgoing responses, using the JavaScript Service Worker APIs you know and love. We built Workers Sites as an extension of the Workers platform (and Workers KV as a storage mechanism at the edge), making it possible to deploy your site assets using a single command in Wrangler: wrangler publish.

When your Workers Site receives a new request, we'll execute JavaScript at the edge to look up a piece of content from Workers KV, and serve it back to the client at lightning speed. Remarkably, you can deploy JAMstack applications on Workers with no additional configuration besides generating your Workers Site: by design, Workers Sites is built to serve as an exceptional JAMstack deployment platform.
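
To make that concrete, here is a minimal sketch of the kind of Worker that Workers Sites generates for you, using the @cloudflare/kv-asset-handler package; the error handling is simplified for illustration.

```javascript
// Minimal Workers Sites-style Worker: every request is answered by looking
// up the corresponding static asset in Workers KV at the edge.
import { getAssetFromKV } from '@cloudflare/kv-asset-handler'

addEventListener('fetch', (event) => {
  event.respondWith(handleEvent(event))
})

async function handleEvent(event) {
  try {
    // getAssetFromKV maps the request URL to a key in the site's KV
    // namespace and returns the stored HTML, CSS, or JS with headers set.
    return await getAssetFromKV(event)
  } catch (e) {
    // No matching asset in KV: fall back to a plain 404.
    return new Response('Not found', { status: 404 })
  }
}
```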

A is for APIs

The advent of static site tooling for personal sites makes sense: your site is a few pages: a few blog posts, for instance, and the classic "About" or "Contact" page. When it's compiled to HTML, the footprint is quite small! This small footprint is what makes static sites easy to reason about: they're trivial to host in terms of bandwidth and storage costs, and they rarely change, so they're easily cacheable.

While that works for personal sites, complex applications actually have data requirements! We need access to the user data in our databases, and the analytics information in our data warehouses. JAMstack apps tackle this by definitively stating that these data sources should be accessible via HTTPS APIs, consumable by the application as a way to provide dynamic information to clients.

Workers is a fascinating platform in regards to JAMstack APIs. It can serve as a gateway to your data, or as a place to persist and return data itself. I can, for instance, expose an API endpoint via my Workers script without giving clients access to my origin. I can also use tooling like Workers KV to persist data directly on the edge, and when a user requests that data, I can resolve the data by returning JSON directly from my application.
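
As an illustration of that second pattern, here is a hedged sketch of a Worker that returns JSON directly from Workers KV; the BOOKMARKS namespace binding and the /api/bookmarks route are hypothetical, not the actual Built with Workers API.

```javascript
// Hypothetical Worker exposing a tiny JSON API backed by Workers KV.
// BOOKMARKS is a KV namespace binding configured in wrangler.toml.
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const url = new URL(request.url)
  if (url.pathname === '/api/bookmarks') {
    // Read a JSON value persisted at the edge and return it directly,
    // with no origin server involved.
    const data = (await BOOKMARKS.get('all', 'json')) || []
    return new Response(JSON.stringify(data), {
      headers: { 'Content-Type': 'application/json' },
    })
  }
  return new Response('Not found', { status: 404 })
}
```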

This flexibility has been an unexpected part of the experience of developing Built with Workers. In a later section of this post, I'll talk about how we developed an integral feature of the site based on the unique strengths of Workers as a way to host static assets and as a dynamic JavaScript execution platform. This has remarkable implications that blur the lines between classic static sites and dynamic applications, and I'm really excited about it.

M is for Markup

A breakthrough moment in my understanding of JAMstack came at the beginning of this year. I was working on a job board for frontend developers, using the static site framework Gatsby.js and Sanity.io, a headless CMS tool that allows developers to model content without maintaining a database or any infrastructure. (As a reminder – this set of tools is identical to what we ultimately used to develop Built with Workers. It's a very good stack!)

SEO is crucial to a job board, and as I began to explore how to drive more traffic to my job board, I landed on the idea of generating a huge amount of search-oriented content automatically from the job data I already had. For instance, if I had job posts with the keywords "React", "Europe", and "Senior" (as in "senior developer"), what if I created pages with titles like "Senior React developer jobs in Europe", or "Remote Angular jobs"? This approach would allow the site to begin ranking for a variety of job positions, locations, and experience levels, and as more jobs were posted on the site, each of these pages would be enriched with more useful information and relevant content, helping it rank higher in search.

"But static sites are... static!", I told myself. Would I need to build an entire dynamic API on top of my static site, just to be able to serve these search-engine optimized pages? This led me to a "eureka" moment with Gatsby – I could define markup (the "M" in JAMstack), and when I'm building my site, I could look at all the available job data I had, cycling through every available keyword combination and inserting it into my markup to generate thousands of these pages. As I later learned, this idea is not necessarily unique to Gatsby – it is possible, for instance, to automate getting data from your API and writing it to data files in earlier static site frameworks like Hugo – but it is a first-class citizen in Gatsby. There are a ton of data sources available via Gatsby plugins, and because they're all exposed via HTTPS, the workflow is standardized inside of the framework.

In Built with Workers, we connect to the Sanity.io CMS instance at build-time: crucially, by the time that the site has been deployed to Workers, the application effectively has no idea what Sanity even is! Our Gatsby application connects to Sanity.io via an HTTPS API, and using GraphQL, we look at all the data that we have in our CMS, and make decisions about what pages to generate and how to render the site's interface, ultimately resulting in a statically-built application that is derived from dynamic data.
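
A hedged sketch of what that build step can look like in gatsby-node.js; the allSanityProject query and the template path are assumptions for illustration, not the actual Built with Workers schema.

```javascript
// gatsby-node.js: generate one static page per CMS entry at build time.
exports.createPages = async ({ graphql, actions }) => {
  const { createPage } = actions
  // Query the headless CMS via Gatsby's GraphQL layer (build time only).
  const result = await graphql(`
    {
      allSanityProject {
        nodes {
          slug {
            current
          }
        }
      }
    }
  `)
  result.data.allSanityProject.nodes.forEach((node) => {
    // Each project becomes plain HTML/CSS/JS; Sanity is never contacted
    // again once the built site is deployed to the edge.
    createPage({
      path: `/projects/${node.slug.current}`,
      component: require.resolve('./src/templates/project.js'),
      context: { slug: node.slug.current },
    })
  })
}
```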

This emphasis on the build step in JAMstack is quite different than the classic web application. In the past, a user requested data, a web server looked at what the user was requesting, and the user waited while the server fetched that data and returned JSON, or interpolated it into templates written in tools like Pug or ERB. With JAMstack, the pages are already built, and the deployed application is just a collection of plain HTML, CSS, and JavaScript.

Why Cloudflare Workers?

Cloudflare's network is a fascinating place to deploy JAMstack applications. Yes, Cloudflare's edge network can act as a CDN for your static assets, like your CSS stylesheets, or your client-side JavaScript code. But with Workers, we now have the ability to run JavaScript side-by-side with our static assets. In most JAMstack applications, the CDN is simply a bucket where your application ends up. Usually, the CDN is the most boring part of the stack! With Cloudflare Workers, we don't just have a CDN: we also have access to an extremely low-latency, fully-featured JavaScript runtime.

The implications of this on the standard JAMstack workflow are, frankly, mind-boggling, and as part of developing Built with Workers, we've been exploring what it means to have this runtime available side-by-side with our statically-built JAMstack application.

To demonstrate this, we’ve implemented a bookmarking feature, which allows users of Built with Workers to bookmark projects. If you look at a project's usage of our streaming HTML rewriter and say "Wow, that's cool!", you can bookmark the project to show your support. This feature, rendered as a button tag, is deceptively simple: it's a single piece of the user interface that makes use of the entirety of the Workers platform to provide user-specific dynamic functionality. We'll explore this in greater detail later in the post – see "Enhancing static sites at the edge".

Categories: Technology

Announcing Built with Workers

Wed, 29/01/2020 - 13:00

Ever since its initial release, Cloudflare Workers has given JavaScript developers a platform for building high-performance applications with automatic scaling.

As with any new technology, we know it can be a bit intimidating to get started. For one thing, running code on the edge is a paradigm shift—forcing us to rethink classic web architecture problems, or removing them altogether. For another, since you can build just about anything, it can be challenging to figure out what to build first.

Today we’re launching Built with Workers, a new site designed to help get those creative juices flowing and unblock you, by answering that simple but important question: What can I build with Cloudflare Workers?

Some time in 1999, at age 11, I received my first graphing calculator. It was a TI-82 that my older sister no longer needed. It was on this very calculator that I learned to write code. Looking back, I’m not sure how exactly I had the patience or sanity to figure it all out.

It was a mess. Among the many difficulties were that I had to type the code out on the calculator’s non-QWERTY keyboard, the language I was writing in didn’t have functions, and oh yeah, the text editor would frequently bug out and I’d inexplicably lose half or all of my code.

But what was perhaps more challenging than all of that, was that I had absolutely zero code examples to draw inspiration from.

I remember when I stumbled on a design pattern to handle input. It was quite the eureka moment. With only labels and gotos, I would have to check if each key was pressed and then loop back around to do it all over again. Little did I know I’d be programming games just about the same way today using requestAnimationFrame.

Though I was able to make a few simple programs hunting around like this, I quickly hit my limits and stopped writing them.

A couple of years later, a friend of mine, with his fancy-pantsed TI-83 Silver Edition calculator with 4× the RAM of mine—I was a bit jealous—showed me a program that came with his fancy new calculator.

It was called Phoenix. If you ever played any graphing calculator games, it was probably this one. It was fast and action-packed: you flew a spaceship around, shooting enemies. It was beautiful.

Seeing this game totally changed my perspective on the platform. I went on to create a couple of games involving similar mechanics and a similar animation style, all because I’d seen this game on a friend’s calculator.

It opened up my eyes to what was possible, gave me the confidence to try things I previously thought were impossible, and brought out the detective inside me to want to figure out how they were able to build each piece of functionality.

A few other friends of mine started writing programs too. We would trade our programs in the back of math class, using a cable to connect our two calculators.

Importantly, when you’d receive a program from another friend’s calculator, you could run it and you could view and manipulate the source code. This allowed us to collaborate on games, by passing them back and forth.

Our apps and games became more complex and interesting. Non-coder friends of ours were becoming interested in our projects too, and we started sharing games at lunch. We would get great feedback from them, leading us to fix bugs, build more stylish graphics and intro sequences, and streamline the player experience.

I’m sure our teachers loved us...

A few years later, I got the Internet.

What made the Internet so exciting to me was that by design, the source code of any website was just sitting right there, one keyboard command away. Just like the calculators.

So I did what many of us did back then—and still do today: if I saw something cool, I stole it. Oh nice hover animation: I’ll take that thank you very much. Cool button design: yup, that’s mine now.

Back then we called them websites not apps, and ourselves web designers not frontend engineers, but in all of the time since then, the web platform hasn’t really changed.

Although the web today is more complex, often with many more layers of abstraction, it’s still the case that if you can see something in your browser, most likely you can quickly access, study, copy, manipulate, borrow and steal the source code that created it in just a few seconds.

This is why I’m personally so excited to bring Built with Workers to the community, and to learn from the projects on it myself.

Every project page has a section in which the creators get to describe how their project uses Cloudflare Workers. Many projects use Workers KV, our distributed key-value store, and Workers Sites, our edge-based static site hosting, and some projects are entirely written, built, tested, and deployed with Cloudflare Workers.

Even better, many projects on Built with Workers are open-source, featuring a direct link to the GitHub repo, allowing you to quickly get at the source. The Built with Workers site is itself one of these open-source projects, featured on the site.

We hope that the projects on Built with Workers will help inspire you to build your next project, by seeing just what’s possible.

Visit Built with Workers

It’s been thrilling to see the incredible projects people are building. If you’ve got a project you’d like to share, please fill out this form.

Categories: Technology

Empowering Your Privacy

Tue, 28/01/2020 - 08:00

Happy Data Privacy Day! At Cloudflare, our mission is to help build a better Internet, and we believe data privacy is core to that mission. But we know words are cheap — even data brokers who sell your personal information will tell you that “privacy is important” to them. So we wanted to take the opportunity on this Data Privacy Day to show you how our commitment to privacy crosses all levels of the work we do at Cloudflare to help make the Internet more private and secure — and therefore better — for everyone.

Privacy on the Internet means different things to different people. Maybe privacy means you get to control your personal data — who can collect it and how it can be used. Or that you have the right to access and delete your personal information. Or maybe it means your online life is protected from government surveillance or from ad trackers and targeted advertising. Maybe you think you should be able to be completely anonymous online. At Cloudflare, we think all these flavors of privacy are equally important, and as we describe in more detail below, we’ve taken steps to address each of these privacy priorities.

Governments don’t necessarily take the same view on what privacy should mean either. Europe has its General Data Protection Regulation (GDPR), under which people have the right to control how their information is used, and the protection of data is a fundamental right under the EU Charter of Fundamental Rights. The United States takes a consumer-centric approach focusing on deceptive use of information, the sale of information, and privacy from unwarranted government surveillance. Brazil’s privacy law is similar to Europe’s, and Canada, New Zealand, Japan, Australia, China, and Singapore (to name a few) have some variation on the theme of a national, comprehensive privacy law.

Rather than viewing privacy of personal data as an ocean of data to be regulated through the lens of any particular government, we think privacy merits a different approach. To begin with, we don’t think there should be an ocean of personal data. We believe in empowering individuals and entities of all sizes with technological tools to reduce the amount of personal data that gets funneled into the data ocean — regardless of whether you live in a country with laws protecting the privacy of your personal data. If we can build tools to help you share less personal data online, then that’s a win for privacy no matter your privacy priorities or country of residence.

Technologies that Enable the Privacy of Personal Data

We’ve said it before — the Internet was not built with privacy and security in mind. But as the Internet has become more essential to daily life and more central to even the most critical corporate and government systems, the world has needed better tools to provide privacy and security for these online functions. When we talk about building a better Internet, for us that means (re)building the Internet with privacy baked in. Since Cloudflare launched in 2010, we’ve released a number of state-of-the-art, privacy-enhancing technologies that can help individuals, businesses, and governments alike:

  • Universal SSL: In 2014, there were 2 million websites that supported encrypted connections. In September of that year we introduced universal SSL (now called Transport Layer Security) for all of our customers, paying and free, and overnight we were able to make SSL easily available at scale to the millions of websites that use Cloudflare. Supporting SSL means that we support encrypting the content of web pages, which had previously been sent as plain text over the Internet. It’s like sending your private, personal information in a locked box instead of on a postcard.
  • Privacy Pass: Cloudflare supports Privacy Pass, which lets users prove their identity across multiple sites anonymously without enabling tracking. When people use anonymity services or shared IPs, it makes it more difficult for website protection services like Cloudflare to identify their requests as coming from legitimate users and not bots. To help reduce the friction for these users — which include some of the most vulnerable users online — Privacy Pass provides them with a way to prove they are legitimate across multiple sites on the Cloudflare network. This is done without revealing their identity, and without exposing Cloudflare customers to additional threats from malicious bots.
  • ESNI: We announced beta support for encrypted Server Name Indication (ESNI) in 2018. Server Name Indication (SNI) was created to allow multiple websites to exist on the same IP address (something that became necessary with the shortage of IPv4 addresses), but it can reveal which websites users are visiting. As described here, ESNI encrypts the SNI, fixing what has been a glaring privacy hole.
  • 1.1.1.1 Public DNS Resolver: In 2018, we announced our public privacy-focused resolver, the 1.1.1.1 Public DNS Resolver (which also turned out to be the world’s fastest public DNS resolver). It was our first consumer product, it’s free, and we built it because we believe that consumers should have the ability to browse the Internet without providers in the middle monitoring user activity. So our public DNS resolver service will never store 1.1.1.1 public DNS resolver users’ IP addresses (referred to as the source IP address) in non-volatile storage, and we anonymize the source IP addresses of 1.1.1.1 public DNS resolver users before logging any data. This way, we have no information about what website a specific user has looked up using the 1.1.1.1 Public DNS Resolver service. We can’t tell who is visiting any given website, and we don’t want to know.
  • DNS over HTTPS (DoH): Using the 1.1.1.1 Public DNS Resolver means that your ISP won’t get your browsing data from acting as your DNS resolver, but they can still see those requests in transit unless you encrypt that channel. For that reason, we added support for DoH (see the sketch after this list). DNS requests can contain some alarmingly personal data, such as your location, the domains and subdomains you have visited, the time of day requests were submitted, and how long you stayed on certain sites. Encrypting those requests ensures that only the user and the resolver get that information, and that no one involved in the transit in between sees it. In addition to DoH, we’ve partnered with Mozilla to support private web browsing in Firefox. We have also employed query minimization to ensure that those who don't need to access the full URL you are requesting simply don’t.
  • 1.1.1.1 Mobile Application with WARP: People are accessing the Internet from their mobile devices more and more, so in 2019 we launched our 1.1.1.1 Mobile Application with WARP. You can enable our mobile application in DNS-only mode to ensure that all of your mobile device's DNS queries are sent to our 1.1.1.1 Public DNS Resolver using either DNS over HTTPS or DNS over TLS. You can also enable WARP in our mobile application, which includes everything from our DNS-only mode and will also route traffic from your device through the Cloudflare network via encrypted tunnels. This means that even if you are accessing websites or mobile applications that are not using HTTPS, the content transmitted to and from your device will be encrypted if you have WARP enabled and will not be sent as plain text over the Internet.  
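
As a small illustration of what DoH looks like on the wire, here is a minimal sketch that queries the 1.1.1.1 resolver over HTTPS using its public JSON API; the record type and output handling are deliberately simplified.

```javascript
// Minimal DNS-over-HTTPS lookup against Cloudflare's 1.1.1.1 resolver,
// using the resolver's JSON API (Accept: application/dns-json).
async function dohLookup(name, type = 'A') {
  const url = `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(name)}&type=${type}`
  const res = await fetch(url, { headers: { accept: 'application/dns-json' } })
  const json = await res.json()
  // Each answer record carries the resolved data (an IP address for A records).
  return (json.Answer || []).map((answer) => answer.data)
}

dohLookup('example.com').then((ips) => console.log(ips))
```
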
How We Do Privacy at Cloudflare

The privacy-enhancing technologies we build are public examples of how we put our money where our mouth is when it comes to privacy. We also want to tell you about the ways — some public, some not — we infuse privacy principles at all levels at Cloudflare.

  • Employee Education and Mindset: An understanding of privacy is core to a Cloudflare employee’s experience right from the start. Employees learn about the role privacy and security play in helping to build a better Internet in their first week at Cloudflare. During the comprehensive employee orientation, we stress the role each employee plays in keeping the company and our customers secure. All employees are required to take annual data protection training, which introduces employees to the fundamentals of the Fair Information Practices (FIPs), GDPR and other applicable laws, and we do targeted training for individual teams, depending on their engagement with personal data, throughout the year.
  • Privacy in Product Development: We have built the FIPs and GDPR requirements into product development. Cloudflare employees take privacy-by-design seriously. We develop products and processes with the principles of data minimization, purpose limitation, and data security always front of mind. We have a product development lifecycle that includes performing privacy impact assessments when we may process personal data. We retain personal data we process for as short a time as necessary to provide our services to our customers. We do not cross-track individual Internet users across sites. We don’t sell personal information. We don’t monetize DNS requests. We detect, deter, and deflect bad actors — we’re not in the business of looking at what any one person (or more specifically, browser) is doing when they browse the Internet. That’s not what we’re about.
  • Internal Compliance with Privacy Regulations: Even before Europe’s watershed GDPR went into effect in 2018 and the California Consumer Privacy Act (CCPA) took effect earlier this month, we were focusing on how to implement the privacy principles embodied in regulations globally. A key part of this has been to minimize our collection of personal data and to only use personal data for the purpose for which it was collected. We view the GDPR and CCPA as a codification of many of the steps we were already taking: only collect the personal data you need to provide the service you’re offering; don’t sell personal information; give people the ability to access, correct, or delete their personal information; and give our customers control over the information that, for example, is cached on our content delivery network (CDN), stored in Workers Key Value Store, or captured by our web application firewall (WAF).
  • Security as a Means to Enhance Privacy: We’re a security company, so naturally we view security as a critical element of ensuring data privacy. In addition to the extensive internal security mechanisms we have in place to protect our customers’ data, we also have become certified under industry standards to demonstrate our commitment to data security. We are ISO 27001 and AICPA SOC 2 Type II certified. Cloudflare's SOC 2 Type II report covers security, confidentiality, and availability controls to protect customer data. We also maintain a SOC 3 report which is the public report of Security, Confidentiality, and Availability controls. In addition to this, we comply with our obligations under the EU Directive on Security of Network and Information Systems (NIS).
  • Privacy-focused Response to Government and Third-Party Requests for Information: Our respect for our customers' privacy applies with equal force to commercial requests and to government or law enforcement requests. Any law enforcement requests that we receive must strictly adhere to the due process of law and be subject to judicial oversight. We believe that U.S. law enforcement requests for the personal data of a non-U.S. person that conflict with the privacy laws of that person’s country of residence (such as the EU GDPR) should be legally challenged. Consistent with both the U.S. CLOUD Act and the proceedings in the Microsoft Ireland case,  providers like Cloudflare may ask U.S. courts to quash requests from U.S. law enforcement based on such a conflict. In addition, it is our policy to notify our customers of a subpoena or other legal process requesting their customer or billing information before disclosure of that information, whether the legal process comes from the government or private parties involved in civil litigation, unless legally prohibited. We also publicly report on the types of requests we receive, as well as our responses, in our semi-annual  Transparency Report. Finally, we publicly list certain types of actions that Cloudflare has never taken in response to government requests, and we commit that if Cloudflare were asked to do any of the things on this list, we would exhaust all legal remedies in order to protect our customers from what we believe are illegal or unconstitutional requests.
  • Bringing Privacy and Security to Vulnerable Entities (Project Galileo): Since 2014, we have been providing a wide range of security products to important, yet vulnerable, voices on the internet with Project Galileo. Privacy is essential to the more than 900 organizations receiving free services under the Project, as many face threats from powerful adversaries. These organizations range from humanitarian groups and non-profit organizations, to journalism and media sites that are repeatedly flooded with malicious attacks in an attempt to knock them offline.
  • Spreading the Message on What We Think Privacy Should Look Like: It isn’t enough to build tools with privacy in mind; we also feel a responsibility to share best practices we have learned and work with policymakers to help them understand the implications of regulation on complex technologies. For example, Cloudflare has actively supported efforts to develop a framework for US Federal privacy standards, urging policymakers to adopt technology-neutral approaches that allow standards to change and improve as technology does. In Europe, we are engaged in the ongoing discussions on the draft ePrivacy Regulation, which aims to enshrine the important principle of confidentiality of communications and guides companies on cookie usage and direct marketing. We are also actively contributing to the EU debate on the draft eEvidence Regulation, which seeks to facilitate cross-border access to data. We believe this initiative must fully respect the EU Charter of Fundamental Rights and the EU data protection framework.
So What’s Next?

Protecting the privacy of personal data is an ongoing journey. Our approach has never been to check the boxes of compliance and move on. We are continually evaluating how we handle personal data and looking for ways to minimize the amount of personal data we receive. We will continue to be self-critical and examine our own motivations for the technologies we develop. And we will keep working, just as we have for the past ten years, to find new ways to secure privacy and security for our customers and for the Internet as a whole.

Categories: Technology

JavaScript Libraries Are Almost Never Updated Once Installed

Mon, 27/01/2020 - 16:48

Cloudflare helps run CDNJS, a very popular way of including JavaScript and other frontend resources on web pages. With the CDNJS team’s permission we collect anonymized and aggregated data from CDNJS requests which we use to understand how people build on the Internet. Our analysis today is focused on one question: once installed on a site, do JavaScript libraries ever get updated?

Let’s consider jQuery, the most popular JavaScript library on Earth. This chart shows the number of requests made for a selected list of jQuery versions over the past 12 months:

[Chart. Spikes in the CDNJS data, as with version 3.3.1, are not uncommon as very large sites add and remove CDNJS script tags.]

We see a steady rise of version 3.4.1 following its release on May 2nd, 2019. What we don’t see is a substantial decline of old versions. Version 3.2.1 averages 36M requests at the beginning of our sample, and 29M at the end, a decline of approximately 20%. This aligns with a corpus of research which shows the average website lasts somewhere between two and four years. Critically, old versions do not decline at anything close to the rate at which new versions grow when they’re released. In fact the release of 3.4.1, as popular as it quickly becomes, doesn’t change the trend of old version deprecation at all.

If you’re curious, the oldest version of jQuery CDNJS includes is 1.10.0, released on May 25, 2013. That version still gets an average of 100k requests per day, and the sites which use it are growing in popularity:

[Chart: daily requests for jQuery 1.10.0]

To confirm our theory, let’s consider another project, TweenMax:

[Chart: requests per day for TweenMax versions]

As this package isn’t as popular as jQuery, the data has been smoothed with a one-week trailing average to make it easier to identify trends.
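
For reference, the smoothing used here can be computed like this minimal sketch; the input is an illustrative array of daily request counts, not the actual CDNJS pipeline.

```javascript
// Seven-day trailing average: each point becomes the mean of itself and
// the (up to) six preceding daily values.
function trailingAverage(series, window = 7) {
  return series.map((_, i) => {
    const start = Math.max(0, i - window + 1)
    const slice = series.slice(start, i + 1)
    return slice.reduce((sum, value) => sum + value, 0) / slice.length
  })
}

console.log(trailingAverage([10, 12, 8, 14, 9, 11, 13, 40, 12]))
```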

Version 1.20.4 begins the year with 18M requests, and ends it with 14M, a decline of about 23%, again in alignment with the loss of websites on the Internet. The growth of 2.1.3 shows clear evidence that the release of a new version has almost no bearing on the popularity of old versions; the trend line for those older versions doesn’t change even as 2.1.3 grows to 29M requests per day.

One conclusion is that whatever libraries you publish will exist on websites forever. The underlying web platform consequently must support aged conventions indefinitely if it is to continue supporting the full breadth of the web.

Cloudflare is very interested in how we can contribute to a web which is kept up-to-date. Please make suggestions in the comments below.

Categories: Technology

Announcing the Cloudflare Access App Launch

Thu, 16/01/2020 - 16:13

Every person joining your team has the same question on Day One: how do I find and connect to the applications I need to do my job?

Since launch, Cloudflare Access has helped improve how users connect to those applications. When you protect an application with Access, users never have to connect to a private network and never have to deal with a clunky VPN client. Instead, they reach on-premise apps as if they were SaaS tools. Behind the scenes, Access evaluates and logs every request to those apps for identity, giving administrators more visibility and security than a traditional VPN.

Administrators need about an hour to deploy Access. End user logins take about 20 ms, and that response time is consistent globally. Unlike VPN appliances, Access runs in every data center in Cloudflare’s network in 200 cities around the world. When Access works well, it should be easy for administrators and invisible to the end user.

However, users still need to locate the applications behind Access, and for internally managed applications, traditional dashboards require constant upkeep. As organizations grow, that roster of links keeps expanding. Department leads and IT administrators can create and publish manual lists, but those become a chore to maintain. Teams need to publish custom versions for contractors or partners that only make certain tools visible.

Starting today, teams can use Cloudflare Access to solve that challenge. We’re excited to announce the first feature in Access built specifically for end users: the Access App Launch portal.

The Access App Launch is a dashboard for all the applications protected by Access. Once enabled, end users can log in and connect to every app behind Access with a single click.

How does it work?

When administrators secure an application with Access, any request to the hostname of that application stops at Cloudflare’s network first. Once there, Cloudflare Access checks the request against the list of users who have permission to reach the application.

To check identity, Access relies on the identity provider that the team already uses. Access integrates with providers like OneLogin, Okta, AzureAD, G Suite and others to determine who a user is. If the user has not logged in yet, Access will prompt them to do so at the configured identity provider.

When the user logs in, they are redirected through a subdomain unique to each Access account. Access assigns that subdomain based on a hostname already active in the account. For example, an account with the hostname “widgetcorp.tech” will be assigned “widgetcorp.cloudflareaccess.com”.

The Access App Launch uses the unique subdomain assigned to each Access account. Now, when users visit that URL directly, Cloudflare Access checks their identity and displays only the applications that the user has permission to reach. When a user clicks on an application, they are redirected to the application behind it. Since they are already authenticated, they do not need to login again.

In the background, the Access App Launch decodes and validates the token stored in the cookie on the account’s subdomain.
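
For a sense of what that validation involves, here is a simplified, hypothetical sketch of decoding such a token in a Worker. The cookie name and claim checks are illustrative; a production check must also verify the token's signature against the published Access signing keys, which this sketch omits.

```javascript
// Hypothetical sketch: read a JWT from the CF_Authorization cookie and
// check its expiry. Real validation must also verify the RS256 signature
// against the account's published public keys before trusting any claim.
function decodeAccessToken(request) {
  const cookie = request.headers.get('Cookie') || ''
  const match = cookie.match(/CF_Authorization=([^;]+)/)
  if (!match) return null
  // A JWT is three base64url segments: header.payload.signature.
  const payloadB64 = match[1].split('.')[1]
  const payload = JSON.parse(atob(payloadB64.replace(/-/g, '+').replace(/_/g, '/')))
  if (payload.exp * 1000 < Date.now()) return null // expired token
  return payload // identity claims, e.g. the user's email
}
```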

How is it configured?

The Access App Launch can be configured in the Cloudflare dashboard in three steps. First, navigate to the Access tab in the dashboard. Next, enable the feature in the “App Launch Portal” card. Finally, define who should be able to use the Access App Launch in the modal that appears and click “Save”. Permissions to use the Access App Launch portal do not impact existing Access policies for who can reach protected applications.

Administrators do not need to manually configure each application that appears in the portal. Access App Launch uses the policies already created in the account to generate a page unique to each individual user, automatically.

Defense-in-depth against phishing attacks

Phishing attacks attempt to trick users by masquerading as a legitimate website. In the case of business users, team members think they are visiting an authentic application. Instead, an attacker can present a spoofed version of the application at a URL that looks like the real thing.

Take “example.com” vs “examрle.com” - they look identical, but one uses the Cyrillic “р” and becomes an entirely different hostname. If an attacker can lure a user to visit “examрle.com”, and make the site look like the real thing, that user could accidentally leak credentials or information.
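
One way to spot the difference programmatically: when a hostname contains non-ASCII characters, the URL parser converts it to a punycode (xn--) form, which a quick check can flag. This is a minimal sketch, not a complete homograph defense.

```javascript
// Flag hostnames whose Unicode form maps to a punycode (xn--) label.
// The WHATWG URL parser performs the IDNA conversion for us.
function looksLikeHomograph(hostname) {
  const ascii = new URL(`https://${hostname}`).hostname
  return ascii.includes('xn--')
}

console.log(looksLikeHomograph('example.com'))  // false (all ASCII)
console.log(looksLikeHomograph('examрle.com'))  // true (Cyrillic "р")
```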

To be successful, the attacker needs to get the victim to visit that fraudulent URL. That frequently happens via email from untrusted senders.

The Access App Launch can help prevent these attacks from targeting internal tools. Teams can instruct users to only navigate to internal applications through the Access App Launch dashboard. When users select a tile in the page, Access will send users to that application using the organization’s SSO.

Cloudflare Gateway can take it one step further. Gateway’s DNS resolver filtering can help defend against phishing attacks that use lookalike sites for legitimate applications that do not sit behind Access. To learn more about adding Gateway in conjunction with Access, sign up to join the beta here.

What’s next?

As part of last week’s announcement of Cloudflare for Teams, the Access App Launch is now available to all Access customers today. You can get started with instructions here.

Interested in learning more about Cloudflare for Teams? Read more about the announcement and features here.

Categories: Technology

Introducing Cloudflare for Campaigns

Wed, 15/01/2020 - 12:30

During the past year, we saw nearly 2 billion global citizens go to the polls to vote in democratic elections. There were major elections in more than 50 countries, including India, Nigeria, and the United Kingdom, as well as elections for the European Parliament. In 2020, we will see a similar number of elections in countries from Peru to Myanmar. In November, U.S. citizens will cast their votes for the 46th President, 435 seats in the U.S. House of Representatives, 35 of the 100 seats in the U.S. Senate, and many state and local elections.

Recognizing the importance of maintaining public access to election information, Cloudflare launched the Athenian Project in 2017, providing U.S. state and local government entities with the tools needed to secure their election websites for free. As we’ve seen, however, political parties and candidates for office all over the world are also frequent targets for cyberattack. Cybersecurity needs for campaign websites and internal tools are at an all-time high.

Although Cloudflare has helped improve the security and performance of political parties and candidates for office all over the world for years, we’ve long felt that we could do more. So today, we’re announcing Cloudflare for Campaigns, a suite of Cloudflare services tailored to campaign needs. Cloudflare for Campaigns is designed to make it easier for all political campaigns and parties, especially those with small teams and limited resources, to get access to cybersecurity services.

Risks faced by political campaigns

Since Russians attempted to use cyberattacks to interfere in the U.S. Presidential election in 2016, the news has been filled with reports of cyber threats against political campaigns, in both the United States and around the world. Hackers targeted the Presidential campaigns of Emmanuel Macron in France and Angela Merkel in Germany with phishing attacks, the main political parties in the UK with DDoS attacks, and congressional campaigns in California with a combination of malware, DDoS attacks and brute force login attempts.

Both because of our services to state and local government election websites through the Athenian Project and because a significant number of political parties and candidates for office use our services, Cloudflare has seen many attacks on election infrastructure and political campaigns firsthand.

During the 2020 U.S. election cycle, Cloudflare has provided services to 18 major presidential campaigns, as well as a range of congressional campaigns. On a typical day, Cloudflare blocks 400,000 attacks against political campaigns, and, on a busy day, Cloudflare blocks more than 40 million attacks against campaigns.

What is Cloudflare for Campaigns?

Cloudflare for Campaigns is a suite of Cloudflare products focused on the needs of political campaigns, particularly smaller campaigns that don’t have the resources to bring significant cybersecurity resources in house. To ensure the security of a campaign website, the Cloudflare for Campaigns package includes Business-level service, as well as security tools particularly helpful for political campaigns websites, such as the web application firewall, rate limiting, load balancing, Enterprise level “I am Under Attack Support”, bot management, and multi-user account enablement.

To secure internal campaign teams, Cloudflare for Campaigns will also provide Cloudflare Access, allowing campaigns to secure, authenticate, and monitor user access to any domain, application, or path on Cloudflare, without using a VPN. Along with Access, we will be providing Cloudflare Gateway with DNS-based filtering at multiple locations to protect campaign staff as they navigate the Internet, keeping malicious content off the campaign’s network and helping prevent users from running into phishing scams or malware sites. Campaigns can use Gateway after the product’s public release.

Cloudflare for Campaigns also includes the Cloudflare reliability and security guide, a set of best practices for political campaigns to maintain their campaign sites and secure their internal teams.

Regulatory Challenges

Although there is widespread agreement that campaigns and political parties face threats of cyberattack, there is less consensus on how best to get political campaigns the help they need.  Many political campaigns and political parties operate under resource constraints, without the technological capability and financial resources to dedicate to cybersecurity. At the same time, campaigns around the world are the subject of a variety of different regulations intended to prevent corruption of democratic processes. As a practical matter, that means that, although campaigns may not have the resources needed to access cybersecurity services, donation of cybersecurity services to campaigns may not always be allowed.

In the U.S., campaign finance regulations prohibit corporations from providing any contributions of either money or services to federal candidates or political party organizations. These rules prevent companies from offering free or discounted services if those services are not provided on the same terms and conditions to similarly situated members of the general public. The Federal Elections Commission (FEC), which enforces U.S. campaign finance laws, has struggled with the issue of how best to apply those rules to the provision of free or discounted cybersecurity services to campaigns. In a number of advisory opinions, it has publicly wrestled with the competing priorities of securing campaigns from cyberattack while not opening a backdoor to donations of goods and services intended to curry favor with particular candidates.

The FEC has issued two advisory opinions to tech companies seeking to provide free or discounted cybersecurity services to campaigns. In 2018, the FEC approved a request by Microsoft to offer a package of enhanced online account security protections for “election-sensitive” users. The FEC reasoned that Microsoft was offering the services to its paid users “based on commercial rather than political considerations, in the ordinary course of its business and not merely for promotional consideration or to generate goodwill.” In July 2019, the FEC approved a request by a cybersecurity company to provide low-cost anti-phishing services to campaigns because those services would be provided in the ordinary course of business and on the same terms and conditions as offered to similarly situated non-political clients.

In September 2018, a month after Microsoft submitted its request, Defending Digital Campaigns (DDC), a nonprofit established with the mission to “secure our democratic campaign process by providing eligible campaigns and political parties, committees, and related organizations with knowledge, training, and resources to defend themselves from cyber threats,” submitted a request to the FEC to offer free or reduced-cost cybersecurity services, including from technology corporations, to federal candidates and parties. Over the following months, the FEC issued and requested comment on multiple draft opinions on whether the donation was permissible and, if so, on what basis. As described by the FEC, to support its position, DDC represented that “federal candidates and parties are singularly ill-equipped to counteract these threats.” The FEC’s advisory opinion to DDC noted:

“You [DDC] state that presidential campaign committees and national party committees require expert guidance on cybersecurity and you contend that the 'vast majority of campaigns' cannot afford full-time cybersecurity staff and that 'even basic cybersecurity consulting software and services' can overextend the budgets of most congressional campaigns. AOR004. For instance, you note that a congressional candidate in California reported a breach to the Federal Bureau of Investigation (FBI) in March of this year but did not have the resources to hire a professional cybersecurity firm to investigate the attack, or to replace infected computers. AOR003.”

In May 2019, the FEC approved DDC’s request to partner with technology companies to provide free and discounted cybersecurity services “[u]nder the unusual and exigent circumstances” presented by the request and “in light of the demonstrated, currently enhanced threat of foreign cyberattacks against party and candidate committees.”

All of these opinions demonstrate the FEC’s desire to allow campaigns to access affordable cybersecurity services because of the heightened threat of cyberattack, while still being cautious to ensure that those services are offered transparently and consistent with the goals of campaign finance laws.

Partnering with DDC to Provide Free Services to US Candidates

We share the view of both DDC and the FEC that political campaigns -- which are central to our democracy -- must have the tools to protect themselves against foreign cyberattack. Cloudflare is therefore excited to announce a new partnership with DDC to provide Cloudflare for Campaigns for free to candidates and parties that meet DDC’s criteria.

Introducing Cloudflare for Campaigns

To receive free services through DDC, political campaigns must meet the following criteria, as DDC laid out to the FEC:

  • A House candidate’s committee that has at least $50,000 in receipts for the current election cycle, and a Senate candidate’s committee that has at least $100,000 in receipts for the current election cycle;
  • A House or Senate candidate’s committee for candidates who have qualified for the general election ballot in their respective elections; or
  • Any presidential candidate’s committee whose candidate is polling above five percent in national polls.

For more information on eligibility for these services under DDC and the next steps, please visit cloudflare.com/campaigns/usa.

Election package

Although political campaigns are regulated differently all around the world, Cloudflare believes that the integrity of all political campaigns should be protected against powerful adversaries. With this in mind, Cloudflare will also offer Cloudflare for Campaigns as a paid service, designed to help campaigns all around the world as we work to address regulatory hurdles. For more information on how to sign up for the Cloudflare election package, please visit cloudflare.com/campaigns.

Categories: Technology

A cost-effective and extensible testbed for transport protocol development

Tue, 14/01/2020 - 16:07
A cost-effective and extensible testbed for transport protocol development

This was originally published on Perf Planet's 2019 Web Performance Calendar.

At Cloudflare, we develop protocols at multiple layers of the network stack. In the past, we focused on HTTP/1.1, HTTP/2, and TLS 1.3. Now, we are working on QUIC and HTTP/3, which are still IETF drafts but are gaining a lot of interest.

QUIC is a secure and multiplexed transport protocol that aims to perform better than TCP under some network conditions. It is specified in a family of documents: a transport layer which specifies packet format and basic state machine, recovery and congestion control, security based on TLS 1.3, and an HTTP application layer mapping, which is now called HTTP/3.

Let’s focus on the transport and recovery layer first. This layer provides a basis for what is sent on the wire (the packet binary format) and how we send it reliably. It includes how to open the connection, how to handshake a new secure session with the help of TLS, how to send data reliably, and how to react when there is packet loss or reordering of packets. It also includes flow control and congestion control to interact well with other transport protocols in the same network. With confidence in the basic transport and recovery layer, we can take a look at higher application layers such as HTTP/3.

To develop such a transport protocol, we need multiple stages of the development environment. Since this is a network protocol, it’s best to test in an actual physical network to see how it works on the wire. We may start development using localhost, but after some time we may want to send and receive packets with other hosts. We can build a lab with a couple of virtual machines, using VirtualBox, VMware, or even Docker. We also have a local testing environment with a Linux VM. But sometimes these have a limited network (localhost only) or are noisy due to other processes on the same host or virtual machines.

The next step is to have a test lab: typically an isolated network dedicated to protocol analysis, consisting of dedicated x86 hosts. Lab configuration is particularly important for testing various cases - there is no one-size-fits-all scenario for protocol testing. For example, EDGE is still running in production mobile networks, but LTE is dominant and 5G deployment is in its early stages. WiFi is very common these days. We want to test our protocol in all those environments. Of course, we can't buy every type of machine or have a very expensive network simulator for every type of environment, so using cheap hardware and an open source OS where we can configure similar environments is ideal.

The QUIC Protocol Testing lab

The goal of the QUIC testing lab is to aid transport layer protocol development. To develop a transport protocol we need to have a way to control our network environment and a way to get as many different types of debugging data as possible. Also we need to get metrics for comparison with other protocols in production.

The QUIC Testing Lab has the following goals:

  • Help with multiple transport protocol development: Developing a new transport layer requires many iterations, from building and validating packets as per protocol spec, to making sure everything works fine under moderate load, to very harsh conditions such as low bandwidth and high packet loss. We need a way to run tests with various network conditions reproducibly in order to catch unexpected issues.
  • Debugging multiple transport protocol development: Recording as much debugging info as we can is important for fixing bugs. Looking into packet captures definitely helps but we also need a detailed debugging log of the server and client to understand the what and why for each packet. For example, when a packet is sent, we want to know why. Is this because there is an application which wants to send some data? Or is this a retransmit of data previously known as lost? Or is this a loss probe which is not an actual packet loss but sent to see if the network is lossy?
  • Performance comparison between each protocol: We want to understand the performance of a new protocol by comparison with existing protocols such as TCP, or with a previous version of the protocol under development. Also we want to test with varying parameters such as changing the congestion control mechanism, changing various timeouts, or changing the buffer sizes at various levels of the stack.
  • Finding bottlenecks or errors easily: When running tests we may see unexpected errors - a transfer that times out, ends with an error, or is corrupted at the client side. Each test needs to verify it ran correctly, by comparing a checksum of the original file with what was actually downloaded, or by checking error codes at the protocol or API level.

A test lab with separate hardware gives us several benefits:

  • We can configure the testing lab without public Internet access - safe and quiet.
  • Handy access to hardware and its console for maintenance purposes, or for adding or updating hardware.
  • We can try other CPU architectures. For clients we use the Raspberry Pi for regular testing because its ARM architecture (32-bit or 64-bit) is similar to modern smartphones, so testing on ARM helps with compatibility before moving to a smartphone OS.
  • We can add a real smartphone for testing, such as an Android phone or iPhone. We can test with WiFi, but these devices also support Ethernet, so we can test them on a wired network for better consistency.

Lab Configuration

Here is a diagram of our QUIC Protocol Testing Lab:

A cost-effective and extensible testbed for transport protocol development

This is a conceptual diagram and we need to configure a switch for connecting each machine. Currently, we have Raspberry Pis (2 and 3) as an Origin and a Client, and small Intel x86 boxes for the Traffic Shaper and Edge server, plus Ethernet switches for interconnectivity.

  • Origin is simply serving HTTP and HTTPS test objects using a web server. Client may download a file from Origin directly to simulate a download direct from a customer's origin server.
  • Client will download a test object from Origin or Edge, using different protocols. In a typical configuration, Client connects to Edge instead of Origin, to simulate an edge server in the real world. For TCP/HTTP we are using the curl command line client and for QUIC, quiche’s http3_client with some modification.
  • Edge is running Cloudflare's web server to serve HTTP/HTTPS via TCP and also the QUIC protocol using quiche. Edge server is installed with the same Linux kernel used on Cloudflare's production machines in order to have the same low level network stack.
  • Traffic Shaper is sitting between Client and Edge (and Origin), controlling network conditions. Currently we are using FreeBSD and ipfw + dummynet. Traffic shaping can also be done using Linux' netem which provides additional network simulation features.

The goal is to run tests with various network conditions, such as bandwidth, latency and packet loss upstream and downstream. The lab is able to run a plaintext HTTP test but currently our focus of testing is HTTPS over TCP and HTTP/3 over QUIC. Since QUIC is running over UDP, both TCP and UDP traffic need to be controlled.

Test Automation and Visualization

In the lab, we have a script installed in Client, which can run a batch of tests with various configuration parameters - for each test combination, we can define a test configuration, including:

  • Network Condition - Bandwidth, Latency, Packet Loss (upstream and downstream)

For example, using the netem traffic shaper we can simulate an LTE network as below (RTT=50ms, BW=22Mbps upstream and downstream, with a BDP-sized queue):

$ tc qdisc add dev eth0 root handle 1:0 netem delay 25ms
$ tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 22mbit buffer 68750 limit 70000
  • Test Object sizes - 1KB, 8KB, … 32MB
  • Test Protocols: HTTPS (TCP) and QUIC (UDP)
  • Number of runs and number of requests in a single connection

The test script outputs a CSV file of results for importing into other tools for data processing and visualization - such as Google Sheets, Excel or even a jupyter notebook. It can also post the results to a database (ClickHouse in our case), so we can query and visualize them.

Sometimes a whole test combination takes a long time - the current standard test set with simulated 2G, 3G, LTE, WiFi and various object sizes repeated 10 times for each request may take several hours to run. Large object testing on a slow network takes most of the time, so sometimes we also need to run a limited test (e.g. testing LTE-like conditions only for a sanity check) for quick debugging.

Chart using Google Sheets:

The following comparison chart shows the total transfer time in msec for TCP vs QUIC under different network conditions. The QUIC protocol used here is a development version.

A cost-effective and extensible testbed for transport protocol development

Debugging and performance analysis using a smartphone

Mobile devices have become a crucial part of our day to day life, so testing the new transport protocol on mobile devices is critically important for mobile app performance. To facilitate that, we need to have a mobile test app which will proxy data over the new transport protocol under development. With this we have the ability to analyze protocol functionality and performance in mobile devices with different network conditions.

Adding a smartphone to the testbed mentioned above gives an advantage in terms of understanding real performance issues. The major smartphone operating systems, iOS and Android, have quite different networking stacks. Adding a smartphone to the testbed gives us the ability to understand these operating systems' network stacks in depth, which aids new protocol designs.

A cost-effective and extensible testbed for transport protocol development

The above figure shows the network block diagram of another, similar lab testbed used for protocol testing, where a smartphone is connected both over wired and wireless links. A Linux netem based traffic shaper sits in between the client and server, shaping the traffic. Various networking profiles are fed to the traffic shaper to mimic real world scenarios. The client can be either an Android or iOS based smartphone, and the server is a vanilla web server serving static files. Client, server and traffic shaper are all connected to the Internet along with the private lab network for management purposes.

The lab has mobile devices for both Android and iOS, installed with a test app built with proprietary client proxy software for proxying data over the new transport protocol under development. The test app also has the ability to make HTTP requests over TCP for comparison purposes.

The Android or iOS test app can be used to issue multiple HTTPS requests of different object sizes, sequentially and concurrently, using TCP and QUIC as the underlying transport protocols. Later, TTOTAL (total transfer time) of each HTTPS request is used to compare TCP and QUIC performance over different network conditions. One such comparison is shown below:

A cost-effective and extensible testbed for transport protocol development

The table above shows the total transfer time taken for TCP and QUIC requests over an LTE network profile fetching different objects with different concurrency levels using the test app. Here TCP goes over the native OS network stack and QUIC goes over Cloudflare's QUIC stack.

Debugging network performance issues is hard when it comes to mobile devices. By adding an actual smartphone into the testbed itself we have the ability to take packet captures at different layers. These are very critical in analyzing and understanding protocol performance.

It's easy and straightforward to capture packets and analyze them using the tcpdump tool on x86 boxes, but it's a challenge to capture packets on iOS and Android devices. On iOS devices, ‘rvictl’ lets us capture packets on an external interface, but it has drawbacks, such as inaccurate timestamps. Since we are dealing with millisecond level events, timestamps need to be accurate to analyze the root cause of a problem.

We can capture packets on internal loopback interfaces on jailbroken iPhones and rooted Android devices. Jailbreaking a recent iOS device is nontrivial. We also need to make sure that autoupdate of any sort is disabled on such a phone, otherwise the jailbreak would be lost and we would have to start the whole process again. With a jailbroken phone we have root access to the device, which lets us take packet captures as needed using tcpdump.

Packet captures taken using jailbroken iOS devices or rooted Android devices connected to the lab testbed help us analyze  performance bottlenecks and improve protocol performance.

iOS and Android devices have different network stacks in their core operating systems. These packet captures also help us understand the network stacks of these mobile devices; for example, on iOS devices, packets punted through the loopback interface had a mysterious delay of 5 to 7ms.

Conclusion

Cloudflare is actively involved in helping to drive forward the QUIC and HTTP/3 standards by testing and optimizing these new protocols in simulated real world environments. By simulating a wide variety of networks we are working on our mission of Helping Build a Better Internet. For everyone, everywhere.

We would like to thank SangJo Lee, Hiren Panchasara, Lucas Pardue and Sreeni Tellakula for their contributions.

Categories: Technology

Helping mitigate the Citrix NetScaler CVE with Cloudflare Access

Sun, 12/01/2020 - 18:11
Helping mitigate the Citrix NetScaler CVE with Cloudflare Access

Yesterday, Citrix sent an updated notification to customers warning of a vulnerability in their Application Delivery Controller (ADC) product. If exploited, malicious attackers can bypass the login page of the administrator portal, without authentication, to perform arbitrary code execution.

No patch is available yet. Citrix expects to have a fix for certain versions on January 20 and others at the end of the month.

In the interim, Citrix has asked customers to attempt to mitigate the vulnerability. The recommended steps involve running a number of commands from an administrator command line interface.

Exploiting the vulnerability requires that attackers first be able to reach a login portal hosted by the ADC. Cloudflare can help teams secure that page and the resources protected by the ADC. Teams can place the login page, as well as the administration interface, behind Cloudflare Access’ identity proxy to prevent unauthenticated users from making requests to the portal.

Exploiting URL paths

Citrix ADC, also known as Citrix NetScaler, is an application delivery controller that provides Layer 3 through Layer 7 security for applications and APIs. Once deployed, administrators manage the installation of the ADC through a portal available at a dedicated URL on a hostname they control.

Users and administrators can reach the ADC interface over multiple protocols, but it appears that the vulnerability stems from HTTP requests containing “/vpn/../vpns/” in the path, via the VPN or AAA endpoints, from which a directory traversal exploit is possible.
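
For illustration, one widely reported probe for this CVE fetched the appliance's smb.conf through the traversal (hypothetical hostname shown); on an unpatched ADC, a request along these lines would be served without authentication:

GET /vpn/../vpns/cfg/smb.conf HTTP/1.1
Host: adc.example.com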

The suggested mitigation steps ask customers to run commands which enforce new responder policies for the ADC interface. Those policies return 403s when certain paths are requested, blocking unauthenticated users from reaching directories that sit behind the authentication flow.

Protecting administrator portals with Cloudflare Access

To exploit this vulnerability, attackers must first be able to reach a login portal hosted by the ADC. As part of a defense-in-depth strategy, Cloudflare Access can prevent attackers from ever reaching the panel over HTTP or SSH.

Cloudflare Access, part of Cloudflare for Teams, protects internally managed resources by checking each request for identity and permission. When administrators secure an application behind Access, any request to the hostname of that application stops at Cloudflare’s network first. Once there, Cloudflare Access checks the request against the list of users who have permission to reach the application.

Deploying Access does not require exposing new holes in corporate firewalls. Teams connect their resources through a secure outbound connection, Argo Tunnel, which runs in your infrastructure to connect the applications and machines to Cloudflare. That tunnel makes outbound-only calls to the Cloudflare network and organizations can replace complex firewall rules with just one: disable all inbound connections.

To defend against attackers addressing IPs directly, Argo Tunnel can help secure the interface and force outbound requests through Cloudflare Access. With Argo Tunnel, and firewall rules preventing inbound traffic, no request can reach those IPs without first hitting Cloudflare, where Access can evaluate the request for authentication.

Administrators then build rules to decide who should authenticate to and reach the tools protected by Access. Whether those resources are virtual machines powering business operations or internal web applications, like Jira or iManage, when a user needs to connect, they pass through Cloudflare first.

When users need to connect to the tools behind Access, they are prompted to authenticate with their team’s SSO and, if valid, instantly connected to the application without being slowed down. Internally managed apps suddenly feel like SaaS products, and the login experience is seamless and familiar.

Behind the scenes, every request made to those internal tools hits Cloudflare first where we enforce identity-based policies. Access evaluates and logs every request to those apps for identity, giving administrators more visibility and security than a traditional VPN.

Helping mitigate the Citrix NetScaler CVE with Cloudflare Access

Cloudflare Access can also be bundled with the Cloudflare WAF, and WAF rules can be applied to guard against this as well. Adding Cloudflare Access, the Cloudflare WAF, and the mitigation commands from Citrix together provide layers of security while a patch is in development.

How to get started

We recommend that users of the Citrix ADC follow the mitigation steps recommended by Citrix. Cloudflare Access adds another layer of security by enforcing identity-based authentication for requests made over HTTP and SSH to the ADC interface. Together, these steps can help form a defense-in-depth strategy until a patch is released by Citrix.

To get started, Citrix ADC users can place their ADC interface and exposed endpoints behind a bastion host secured by Cloudflare Access. On that bastion host, administrators can use Cloudflare Argo Tunnel to open outbound-only connections to Cloudflare through which HTTP and SSH requests can be proxied.

Once deployed, users of the login portal can connect to the protected hostname. Cloudflare Access will prompt them to login with their identity provider and Cloudflare will validate the user against the rules created to control who can reach the interface. If authenticated and allowed, the user will be able to connect. No other requests will be able to reach the interface over HTTP or SSH without authentication.

The first five seats of Cloudflare Access are free. Teams can sign up here to get started.

Categories: Technology

Accelerating UDP packet transmission for QUIC

Wed, 08/01/2020 - 17:08
Accelerating UDP packet transmission for QUIC

This was originally published on Perf Planet's 2019 Web Performance Calendar.

QUIC, the new Internet transport protocol designed to accelerate HTTP traffic, is delivered on top of UDP datagrams, to ease deployment and avoid interference from network appliances that drop packets from unknown protocols. This also allows QUIC implementations to live in user-space, so that, for example, browsers will be able to implement new protocol features and ship them to their users without having to wait for operating systems updates.

But while a lot of work has gone into optimizing TCP implementations as much as possible over the years, including building offloading capabilities in both software (like in operating systems) and hardware (like in network interfaces), UDP hasn't received quite as much attention as TCP, which puts QUIC at a disadvantage. In this post we'll look at a few tricks that help mitigate this disadvantage for UDP, and by association QUIC.

For the purpose of this blog post we will only be concentrating on measuring throughput of QUIC connections, which, while necessary, is not enough to paint an accurate overall picture of the performance of the QUIC protocol (or its implementations) as a whole.

Test Environment

The client used in the measurements is h2load, built with QUIC and HTTP/3 support, while the server is NGINX, built with the open-source QUIC and HTTP/3 module provided by Cloudflare which is based on quiche (github.com/cloudflare/quiche), Cloudflare's own open-source implementation of QUIC and HTTP/3.

The client and server are run on the same host (my laptop) running Linux 5.3, so the numbers don’t necessarily reflect what one would see in a production environment over a real network, but it should still be interesting to see how much of an impact each of the techniques have.

Baseline

Currently the code that implements QUIC in NGINX uses the sendmsg() system call to send a single UDP packet at a time.

ssize_t sendmsg(int sockfd, const struct msghdr *msg, int flags);

The struct msghdr carries a struct iovec which can in turn carry multiple buffers. However, all of the buffers within a single iovec will be merged together into a single UDP datagram during transmission. The kernel will then take care of encapsulating the buffer in a UDP packet and sending it over the wire.
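
As a minimal sketch (not the actual NGINX code), sending a single QUIC packet this way might look like the following, assuming fd is a connected UDP socket and pkt holds an already-built, encrypted packet:

/*
 * Hedged sketch: one sendmsg() call per UDP datagram.
 * "send_one_packet" is an illustrative helper, not an NGINX function.
 */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/uio.h>

ssize_t send_one_packet(int fd, const uint8_t *pkt, size_t pkt_len) {
    struct iovec iov = {
        .iov_base = (void *)pkt,
        .iov_len  = pkt_len,
    };

    struct msghdr msg;
    memset(&msg, 0, sizeof(msg));
    msg.msg_iov    = &iov;
    msg.msg_iovlen = 1;

    /* All buffers in the iovec are merged into a single UDP datagram. */
    return sendmsg(fd, &msg, 0);
}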

Accelerating UDP packet transmission for QUIC

The throughput of this particular implementation tops out at around 80-90 MB/s, as measured by h2load when performing 10 sequential requests for a 100 MB resource.

Accelerating UDP packet transmission for QUIC

sendmmsg()

Due to the fact that sendmsg() only sends a single UDP packet at a time, it needs to be invoked quite a lot in order to transmit all of the QUIC packets required to deliver the requested resources, as illustrated by the following bpftrace command:

% sudo bpftrace -p $(pgrep nginx) -e 'tracepoint:syscalls:sys_enter_sendm* { @[probe] = count(); }'
Attaching 2 probes...
@[tracepoint:syscalls:sys_enter_sendmsg]: 904539

Each of those system calls causes an expensive context switch between the application and the kernel, thus impacting throughput.

But while sendmsg() only transmits a single UDP packet at a time for each invocation, its close cousin sendmmsg() (note the additional “m” in the name) is able to batch multiple packets per system call:

int sendmmsg(int sockfd, struct mmsghdr *msgvec, unsigned int vlen, int flags);

Multiple struct mmsghdr structures can be passed to the kernel as an array, each in turn carrying a single struct msghdr with its own struct iovec, with each element in the msgvec array representing a single UDP datagram.
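
A hedged sketch of what that batching might look like (again, not the actual NGINX code; BATCH, the fixed 1350-byte buffers, and send_batch are illustrative choices):

#define _GNU_SOURCE   /* sendmmsg() and struct mmsghdr are GNU extensions */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

#define BATCH 16

int send_batch(int fd, uint8_t bufs[BATCH][1350], size_t lens[BATCH], unsigned int n) {
    struct iovec iov[BATCH];
    struct mmsghdr msgvec[BATCH];
    memset(msgvec, 0, sizeof(msgvec));

    for (unsigned int i = 0; i < n; i++) {
        iov[i].iov_base = bufs[i];
        iov[i].iov_len  = lens[i];
        /* Each msgvec element becomes its own UDP datagram on the wire. */
        msgvec[i].msg_hdr.msg_iov    = &iov[i];
        msgvec[i].msg_hdr.msg_iovlen = 1;
    }

    /* Returns how many of the n datagrams were actually queued for sending. */
    return sendmmsg(fd, msgvec, n, 0);
}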

Accelerating UDP packet transmission for QUIC

Let's see what happens when NGINX is updated to use sendmmsg() to send QUIC packets:

% sudo bpftrace -p $(pgrep nginx) -e 'tracepoint:syscalls:sys_enter_sendm* { @[probe] = count(); }'
Attaching 2 probes...
@[tracepoint:syscalls:sys_enter_sendmsg]: 2437
@[tracepoint:syscalls:sys_enter_sendmmsg]: 15676

The number of system calls went down dramatically, which translates into an increase in throughput, though not quite as big as the decrease in syscalls:

Accelerating UDP packet transmission for QUIC

UDP segmentation offload

With sendmsg() as well as sendmmsg(), the application is responsible for separating each QUIC packet into its own buffer in order for the kernel to be able to transmit it. The implementation in NGINX uses static buffers, so there is no overhead in allocating them, but all of these buffers still need to be traversed by the kernel during transmission, which can add significant overhead.

Linux supports a feature, Generic Segmentation Offload (GSO), which allows the application to pass a single "super buffer" to the kernel, which will then take care of segmenting it into smaller packets. The kernel will try to postpone the segmentation as much as possible to reduce the overhead of traversing outgoing buffers (some NICs even support hardware segmentation, but it was not tested in this experiment due to lack of capable hardware). Originally GSO was only supported for TCP, but support for UDP GSO was recently added as well, in Linux 4.18.

This feature can be controlled using the UDP_SEGMENT socket option:

setsockopt(fd, SOL_UDP, UDP_SEGMENT, &gso_size, sizeof(gso_size));

As well as via ancillary data, to control segmentation for each sendmsg() call:

cm = CMSG_FIRSTHDR(&msg);
cm->cmsg_level = SOL_UDP;
cm->cmsg_type = UDP_SEGMENT;
cm->cmsg_len = CMSG_LEN(sizeof(uint16_t));
*((uint16_t *) CMSG_DATA(cm)) = gso_size;

Where gso_size is the size of each segment that forms the "super buffer" passed to the kernel from the application. Once configured, the application can provide one contiguous large buffer containing a number of packets of gso_size length (as well as a final smaller packet), which will then be segmented by the kernel (or the NIC if hardware segmentation offloading is supported and enabled).

Accelerating UDP packet transmission for QUIC

Up to 64 segments can be batched with the UDP_SEGMENT option.
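
Putting the pieces together, a sketch of a GSO send might look like this (assumes Linux 4.18 or later; send_gso is an illustrative helper, not part of any library):

#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <netinet/udp.h>   /* SOL_UDP, UDP_SEGMENT */

ssize_t send_gso(int fd, const uint8_t *buf, size_t buf_len, uint16_t gso_size) {
    /* One contiguous "super buffer": several gso_size-byte packets
     * back to back, optionally followed by one shorter tail packet. */
    struct iovec iov = { .iov_base = (void *)buf, .iov_len = buf_len };

    char control[CMSG_SPACE(sizeof(uint16_t))];
    memset(control, 0, sizeof(control));

    struct msghdr msg;
    memset(&msg, 0, sizeof(msg));
    msg.msg_iov        = &iov;
    msg.msg_iovlen     = 1;
    msg.msg_control    = control;
    msg.msg_controllen = sizeof(control);

    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_UDP;
    cm->cmsg_type  = UDP_SEGMENT;
    cm->cmsg_len   = CMSG_LEN(sizeof(uint16_t));
    /* The kernel (or a capable NIC) splits the buffer into gso_size chunks. */
    *((uint16_t *)CMSG_DATA(cm)) = gso_size;

    return sendmsg(fd, &msg, 0);
}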

GSO with plain sendmsg() already delivers a significant improvement:

Accelerating UDP packet transmission for QUIC

And indeed the number of syscalls also went down significantly, compared to plain sendmsg():

% sudo bpftrace -p $(pgrep nginx) -e 'tracepoint:syscalls:sys_enter_sendm* { @[probe] = count(); }'
Attaching 2 probes...
@[tracepoint:syscalls:sys_enter_sendmsg]: 18824

GSO can also be combined with sendmmsg() to deliver an even bigger improvement. The idea being that each struct msghdr can be segmented in the kernel by setting the UDP_SEGMENT option using ancillary data, allowing an application to pass multiple “super buffers”, each carrying up to 64 segments, to the kernel in a single system call.

The improvement is again fairly significant:

Accelerating UDP packet transmission for QUIC

Evolving from AFAP

Transmitting packets as fast as possible is easy to reason about, and there's much fun to be had in optimizing applications for that, but in practice this is not always the best strategy when optimizing protocols for the Internet.

Bursty traffic is more likely to cause or be affected by congestion on any given network path, which will inevitably defeat any optimization implemented to increase transmission rates.

Packet pacing is an effective technique to squeeze out more performance from a network flow. The idea is that adding a short delay between each outgoing packet will smooth out bursty traffic and reduce the chance of congestion and packet loss. For TCP this was originally implemented in Linux via the fq packet scheduler, and later by the BBR congestion control algorithm implementation, which implements its own pacer.

Accelerating UDP packet transmission for QUIC

Due to the nature of current QUIC implementations, which reside entirely in user-space, pacing of QUIC packets conflicts with any of the techniques explored in this post, because pacing each packet separately during transmission will prevent any batching on the application side, and in turn batching will prevent pacing, as batched packets will be transmitted as fast as possible once received by the kernel.

However Linux provides some facilities to offload the pacing to the kernel and give back some control to the application:

  • SO_MAX_PACING_RATE: an application can define this socket option to instruct the fq packet scheduler to pace outgoing packets up to the given rate (a minimal sketch follows this list). This works for UDP sockets as well, but it is yet to be seen how this can be integrated with QUIC, as a single UDP socket can be used for multiple QUIC connections (unlike TCP, where each connection has its own socket). In addition, this is not very flexible, and might not be ideal when implementing the BBR pacer.
  • SO_TXTIME / SCM_TXTIME: an application can use these options to schedule transmission of specific packets at specific times, essentially instructing fq to delay packets until the provided timestamp is reached. This gives the application a lot more control, and can be easily integrated into sendmsg() as well as sendmmsg(). But it does not yet support specifying different times for each packet when GSO is used, as there is no way to define multiple timestamps for packets that need to be segmented (each segmented packet essentially ends up being sent at the same time anyway).
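
As a minimal sketch of the first option, assuming the fq packet scheduler is installed on the egress interface (set_pacing_rate is an illustrative helper, not part of any library):

#include <sys/socket.h>

int set_pacing_rate(int fd, unsigned int rate_bytes_per_sec) {
    /* 0 means no limit; otherwise fq spaces out this socket's packets
     * so the transmission rate stays below rate_bytes_per_sec. */
    return setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
                      &rate_bytes_per_sec, sizeof(rate_bytes_per_sec));
}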

While the performance gains achieved by using the techniques illustrated here are fairly significant, there are still open questions around how any of this will work with pacing, so more experimentation is required.

Categories: Technology

Prototyping optimizations with Cloudflare Workers and WebPageTest

Wed, 08/01/2020 - 15:00
Prototyping optimizations with Cloudflare Workers and WebPageTest

This article was originally published as part of  Perf Planet's 2019 Web Performance Calendar.

Have you ever wanted to quickly test a new performance idea, or see if the latest performance wisdom is beneficial to your site? As web performance appears to be a stochastic process, it is really important to be able to iterate quickly and review the effects of different experiments. The challenge is to be able to arbitrarily change requests and responses without the overhead of setting up another internet facing server. This can be straightforward to implement by combining two of my favourite technologies: WebPageTest and Cloudflare Workers. Pat Meenan sums this up with the following slide from a recent presentation on getting the most out of WebPageTest:

Prototyping optimizations with Cloudflare Workers and WebPageTest

So what is Cloudflare Workers and why is it ideally suited to easy prototyping of optimizations?

Cloudflare Workers

From the documentation:

Cloudflare Workers provides a lightweight JavaScript execution environment that allows developers to augment existing applications or create entirely new ones without configuring or maintaining infrastructure.

A Cloudflare Worker is a programmable proxy which brings the simplicity and flexibility of the Service Workers event-based fetch API from the browser to the edge. This allows a worker to intercept and modify requests and responses.

Prototyping optimizations with Cloudflare Workers and WebPageTest

With the Service Worker API you can add an EventListener to any fetch event that is routed through the worker script and modify the request to come from a different origin.

Cloudflare Workers also provides a streaming HTMLRewriter to enable on the fly modification of HTML as it passes through the worker. The streaming nature of this parser ensures latency is minimised as the entire HTML document does not have to be buffered before rewriting can happen.

Setting up a worker

It is really quick and easy to sign up for a free subdomain at workers.dev, which provides you with 100,000 free requests per day. There is a quick-start guide available here. To be able to run the examples in this post you will need to install Wrangler, the CLI tool for deploying workers. Once Wrangler is installed, run the following command to download the example worker project:

wrangler generate wpt-proxy https://github.com/xtuc/WebPageTest-proxy

You will then need to update the wrangler.toml with your account_id, which can be found in the dashboard in the right sidebar. Then configure an API key with the command:

wrangler config

Finally, you can publish the worker with:  

wrangler publish

At this point, the worker will be active at

https://wpt-proxy.<your-subdomain>.workers.dev.

WebPageTest OverrideHost  

Now that your worker is configured, the next step is to configure WebPageTest to redirect requests through the worker. WebPageTest has a feature where it can re-point arbitrary origins to a different domain. To access the feature in WebPageTest, you need to use the WebPageTest scripting language "overrideHost" command, as shown:

Prototyping optimizations with Cloudflare Workers and WebPageTest

This example will redirect all network requests to www.bbc.co.uk to wpt-proxy.prf.workers.dev instead. WebPageTest also adds an x-host header to each redirected request so that the destination can determine for which host the request was originally intended:    

x-host: www.bbc.co.uk

The script can process multiple overrideHost commands to override multiple different origins. If HTTPS is used, WebPageTest can use HTTP/2 and benefit from connection coalescing:  

overrideHost www.bbc.co.uk wpt-proxy.prf.workers.dev
overrideHost nav.files.bbci.co.uk wpt-proxy.prf.workers.dev
navigate https://www.bbc.co.uk

 It also supports wildcards:  

overrideHost *bbc.co.uk wpt-proxy.prf.workers.dev
navigate https://www.bbc.co.uk

There are a few special strings that can be used in a script when bulk testing, so a single script can be re-used across multiple URLs:

  • %URL% - Replaces with the URL of the current test
  • %HOST% - Replaces with the hostname of the URL of the current test
  • %HOSTR% - Replaces with the hostname of the final URL in case the test URL does a redirect.

A more generic script would look like this:    

overrideHost %HOSTR% wpt-proxy.prf.workers.dev
navigate %URL%

Basic worker

In the base example below, the worker listens for the fetch event, looks for the x-host header that WebPageTest has set and responds by fetching the content from the original URL:

/*
 * Handle all requests.
 * Proxy requests with an x-host header and return 403
 * for everything else
 */
addEventListener("fetch", event => {
  const host = event.request.headers.get('x-host');
  if (host) {
    const url = new URL(event.request.url);
    const originUrl = url.protocol + '//' + host + url.pathname + url.search;
    let init = {
      method: event.request.method,
      redirect: "manual",
      headers: [...event.request.headers]
    };
    event.respondWith(fetch(originUrl, init));
  } else {
    const response = new Response('x-Host headers missing', {status: 403});
    event.respondWith(response);
  }
});

The source code can be found here and instructions to download and deploy this worker are described in the earlier section.

So what happens if we point all the domains on the BBC website through this worker, using the following config:  

overrideHost *bbci.co.uk wpt.prf.workers.dev
overrideHost *bbc.co.uk wpt.prf.workers.dev
navigate https://www.bbc.co.uk

configured to a 3G Fast setting from a UK test location.

Prototyping optimizations with Cloudflare Workers and WebPageTest

Comparison of the BBC website when using a single connection.

Before: Prototyping optimizations with Cloudflare Workers and WebPageTest
After: Prototyping optimizations with Cloudflare Workers and WebPageTest

The potential performance improvement of loading a page over a single connection, eliminating the additional DNS lookup, TCP connection and TLS handshakes, can be seen  by comparing the filmstrips and waterfalls. There are several reasons why you may not want or be able to move everything to a single domain, but at least it is now easy to see what the performance difference would be.  

HTMLRewriter

With the HTMLRewriter, it is possible to change the HTML response as it passes through the worker. A jQuery-like syntax provides CSS-selector matching and a standard set of DOM mutation methods. For instance you could rewrite your page to measure the effects of different preload/prefetch strategies, review the performance savings of removing or using different third-party scripts, or you could stock-take the HEAD of your document. One piece of performance advice is to self-host some third-party scripts. This example script invokes the HTMLRewriter to listen for a script tag with a src attribute. If the script is from a proxiable domain the src is rewritten to be first-party, with a specific path prefix.

async function rewritePage(request) {
  const response = await fetch(request);
  return new HTMLRewriter()
    .on("script[src]", {
      element: el => {
        let src = el.getAttribute("src");
        if (PROXIED_URL_PREFIXES_RE.test(src)) {
          el.setAttribute("src", createProxiedScriptUrl(src));
        }
      }
    })
    .transform(response);
}

Subsequently, when the browser makes a request with the specific prefix, the worker fetches the asset from the original URL. This example can be downloaded with this command:    

wrangler generate test https://github.com/xtuc/rewrite-3d-party.git

Request Mangling

As well as rewriting content, it is also possible to change or delay a request. Below is an example of how to randomly add a delay of a second to a request:

addEventListener("fetch", event => { const host = event.request.headers.get('x-host'); if (host) { //.... // Add the delay if necessary if (Math.random() * 100 < DELAY_PERCENT) { await new Promise(resolve => setTimeout(resolve, DELAY_MS)); } event.respondWith(fetch(originUrl, init)); //... }HTTP/2 prioritization

What if you want to see what the effect of changing the HTTP/2 prioritization of assets would make to your website? Cloudflare Workers provide custom http2 prioritization schemes that can be applied by setting a custom header on the response. The cf-priority header is defined as <priority>/<concurrency> so adding:    

response.headers.set('cf-priority', "30/0");

would set the priority of that response to 30 with a concurrency of 0. Similarly, “30/1” would set concurrency to 1 and “30/n” would set concurrency to n. With this flexibility, you can prioritize the bytes that are important for your website or run a bulk test to prove that your new prioritization scheme is better than any of the existing browser implementations.

Summary

A major barrier to understanding and innovation is the amount of time it takes to get feedback. Having a quick and easy framework to try out a new idea and comprehend its impact is key. I hope this post has convinced you that combining WebPageTest and Cloudflare Workers is an easy solution to this problem, and is indeed magic.

Categories: Technology

Introducing Cloudflare for Teams

Tue, 07/01/2020 - 14:00
Introducing Cloudflare for Teams

Ten years ago, when Cloudflare was created, the Internet was a place that people visited. People still talked about ‘surfing the web’ and the iPhone was less than two years old, but on July 4, 2009 large scale DDoS attacks were launched against websites in the US and South Korea.

Those attacks highlighted how fragile the Internet was and how all of us were becoming dependent on access to the web as part of our daily lives.

Fast forward ten years and the speed, reliability and safety of the Internet is paramount as our private and work lives depend on it.

We started Cloudflare to solve one half of every IT organization's challenge: how do you ensure the resources and infrastructure that you expose to the Internet are safe from attack, fast, and reliable. We saw that the world was moving away from hardware and software to solve these problems and instead wanted a scalable service that would work around the world.

To deliver that, we built one of the world's largest networks. Today our network spans more than 200 cities worldwide and is within milliseconds of nearly everyone connected to the Internet. We have built the capacity to stand up to nation-state scale cyberattacks and a threat intelligence system powered by the immense amount of Internet traffic that we see.

Introducing Cloudflare for Teams

Today we're expanding Cloudflare's product offerings to solve the other half of every IT organization's challenge: ensuring the people and teams within an organization can access the tools they need to do their job and are safe from malware and other online threats.

The speed, reliability, and protection we’ve brought to public infrastructure is extended today to everything your team does on the Internet.

In addition to protecting an organization's infrastructure, IT organizations are charged with ensuring that employees of an organization can access the tools they need safely. Traditionally, these problems would be solved by hardware products like VPNs and Firewalls. VPNs let authorized users access the tools they needed and Firewalls kept malware out.

Castle and Moat

Introducing Cloudflare for Teams

The dominant model was the idea of a castle and a moat. You put all your valuable assets inside the castle. Your Firewall created the moat around the castle to keep anything malicious out. When you needed to let someone in, a VPN acted as the drawbridge over the moat.

This is still the model most businesses use today, but it's showing its age. The first challenge is that if an attacker is able to find its way over the moat and into the castle, then it can cause significant damage. Unfortunately, few weeks go by without a news story about an organization having significant data compromised because an employee fell for a phishing email, or a contractor was compromised, or someone was able to sneak into an office and plug in a rogue device.

The second challenge of the model is the rise of cloud and SaaS. Increasingly an organization's resources aren't in just one castle anymore, but are instead spread across different public cloud and SaaS vendors.

Services like Box, for instance, provide better storage and collaboration tools than most organizations could ever hope to build and manage themselves. But there's literally nowhere you can ship a hardware box to Box in order to build your own moat around their SaaS castle. Box provides some great security tools themselves, but they are different from the tools provided by every other SaaS and public cloud vendor. Where IT organizations used to try to have a single pane of glass with a complex mess of hardware to see who was getting stopped by their moats and who was crossing their drawbridges, SaaS and cloud make that visibility increasingly difficult.

The third challenge to the traditional castle and moat strategy of IT is the rise of mobile. Where once upon a time your employees would all show up to work in your castle, now people are working around the world. Requiring everyone to login to a limited number of central VPNs becomes obviously absurd when you picture it as villagers having to sprint back from wherever they are across a drawbridge whenever they want to get work done. It's no wonder VPN support is one of the top IT organization tickets and likely always will be for organizations that maintain a castle and moat approach.

Introducing Cloudflare for Teams

But it's worse than that. Mobile has also introduced a culture where employees bring their own devices to work. Or, even if on a company-managed device, work from the road or home — beyond the protected walls of the castle and without the security provided by a moat.

If you'd looked at how we managed our own IT systems at Cloudflare four years ago, you'd have seen us following this same model. We used firewalls to keep threats out and required every employee to login through our VPN to get their work done. Personally, as someone who travels extensively for my job, it was especially painful.

Regularly, someone would send me a link to an internal wiki article asking for my input. I'd almost certainly be working from my mobile phone in the back of a cab running between meetings. I'd try and access the link and be prompted to login to our VPN in San Francisco. That's when the frustration would start.

Corporate mobile VPN clients, in my experience, all seem to be powered by some 100-sided die that only will allow you to connect if the number of miles you are from your home office is less than 25 times whatever number is rolled. Much frustration, and several IT tickets later, with a little luck I may be able to connect. And, even then, the experience was horribly slow and unreliable.

When we audited our own system, we found that the frustration with the process had caused multiple teams to create workarounds that were, effectively, unauthorized drawbridges over our carefully constructed moat. And, as we increasingly adopted SaaS tools like Salesforce and Workday, we lost much of our visibility into how these tools were being used.

Around the same time we were realizing the traditional approach to IT security was untenable for an organization like Cloudflare, Google published their paper titled "BeyondCorp: A New Approach to Enterprise Security." The core idea was that a company's intranet should be no more trusted than the Internet. And, rather than the perimeter being enforced by a singular moat, instead each application and data source should authenticate the individual and device each time it is accessed.

The BeyondCorp idea, which has come to be known as a ZeroTrust model for IT security, was influential for how we thought about our own systems. Powerfully, because Cloudflare had a flexible global network, we were able to use it both to enforce policies as our team accessed tools as well as to protect ourselves from malware as we did our jobs.

Cloudflare for Teams

Today, we're excited to announce Cloudflare for Teams™: the suite of tools we built to protect ourselves, now available to help any IT organization, from the smallest to the largest.

Cloudflare for Teams is built around two complementary products: Access and Gateway. Cloudflare Access™ is the modern VPN — a way to ensure your team members get fast access to the resources they need to do their job while keeping threats out. Cloudflare Gateway™ is the modern Next Generation Firewall — a way to ensure that your team members are protected from malware and follow your organization's policies wherever they go online.

Powerfully, both Cloudflare Access and Cloudflare Gateway are built atop the existing Cloudflare network. That means they are fast, reliable, scalable to the largest organizations, DDoS resistant, and located everywhere your team members are today and wherever they may travel. Have a senior executive going on a photo safari to see giraffes in Kenya, gorillas in Rwanda, and lemurs in Madagascar — don't worry, we have Cloudflare data centers in all those countries (and many more) and they all support Cloudflare for Teams.

Introducing Cloudflare for Teams

All Cloudflare for Teams products are informed by the threat intelligence we see across all of Cloudflare's products. We see such a large diversity of Internet traffic that we often see new threats and malware before anyone else. We've supplemented our own proprietary data with additional data sources from leading security vendors, ensuring Cloudflare for Teams provides a broad set of protections against malware and other online threats.

Moreover, because Cloudflare for Teams runs atop the same network we built for our infrastructure protection products, we can deliver them very efficiently. That means that we can offer these products to our customers at extremely competitive prices. Our goal is to make the return on investment (ROI) for all Cloudflare for Teams customers nothing short of a no brainer. If you’re considering another solution, contact us before you decide.

Both Cloudflare Access and Cloudflare Gateway also build off products we've launched and battle tested already. For example, Gateway builds, in part, off our 1.1.1.1 Public DNS resolver. Today, more than 40 million people trust 1.1.1.1 as the fastest public DNS resolver globally. By adding malware scanning, we were able to create our entry-level Cloudflare Gateway product.

Cloudflare Access and Cloudflare Gateway build off our WARP and WARP+ products. We intentionally built a consumer mobile VPN service because we knew it would be hard. The millions of WARP and WARP+ users who have put the product through its paces have ensured that it's ready for the enterprise. That we have 4.5 stars across more than 200,000 ratings, just on iOS, is a testament of how reliable the underlying WARP and WARP+ engines have become. Compare that with the ratings of any corporate mobile VPN client, which are unsurprisingly abysmal.

We’ve partnered with some incredible organizations to create the ecosystem around Cloudflare for Teams. These include endpoint security solutions such as VMware Carbon Black, Malwarebytes, and Tanium; SIEM and analytics solutions such as Datadog, Sumo Logic, and Splunk; and identity platforms such as Okta, OneLogin, and Ping Identity. Feedback from these partners and more is at the end of this post.

If you’re curious about more of the technical details about Cloudflare for Teams, I encourage you to read Sam Rhea’s post.

Serving Everyone

Cloudflare has always believed in the power of serving everyone. That’s why we’ve offered a free version of Cloudflare for Infrastructure since we launched in 2010. That belief doesn’t change with our launch of Cloudflare for Teams. For both Cloudflare Access and Cloudflare Gateway, there will be free versions to protect individuals, home networks, and small businesses. We remember what it was like to be a startup and believe that everyone deserves to be safe online, regardless of their budget.

With both Cloudflare Access and Gateway, the products are segmented along a Good, Better, Best framework. That breaks out into Access Basic, Access Pro, and Access Enterprise. You can see the features available with each tier in the table below, including Access Enterprise features that will roll out over the coming months.

Introducing Cloudflare for Teams

We wanted a similar Good, Better, Best framework for Cloudflare Gateway. Gateway Basic can be provisioned in minutes through a simple change to your network’s recursive DNS settings. Once in place, network administrators can set rules on what domains should be allowed and filtered on the network. Cloudflare Gateway is informed both by the malware data gathered from our global sensor network as well as a rich corpus of domain categorization, allowing network operators to set whatever policy makes sense for them. Gateway Basic leverages the speed of 1.1.1.1 with granular network controls.

Gateway Pro, which we’re announcing today and you can sign up to beta test as its features roll out over the coming months, extends the DNS-provisioned protection to a full proxy. Gateway Pro can be provisioned via the WARP client — which we are extending beyond iOS and Android mobile devices to also support Windows, MacOS, and Linux — or network policies including MDM-provisioned proxy settings or GRE tunnels from office routers. This allows a network operator to filter on policies not merely by the domain but by the specific URL.

Introducing Cloudflare for Teams

Building the Best-in-Class Network Gateway

While Gateway Basic (provisioned via DNS) and Gateway Pro (provisioned as a proxy) made sense, we wanted to imagine what the best-in-class network gateway would be for enterprises that value the highest level of performance and security. As we talked to these organizations we heard an ever-present concern: just surfing the Internet created risk of unauthorized code compromising devices. With every page that every user visited, third party code (JavaScript, etc.) was being downloaded and executed on their devices.

The solution, they suggested, was to isolate the local browser from third party code and have websites render in the network. This technology is known as browser isolation. And, in theory, it’s a great idea. Unfortunately, in practice with current technology, it doesn’t perform well. The most common way browser isolation technology works is to render the page on a server and then push a bitmap of the page down to the browser. This is known as pixel pushing. The challenge is that pixel pushing can be slow and bandwidth intensive, and it breaks many sophisticated web applications.

We were hopeful that we could solve some of these problems by moving the rendering of the pages to Cloudflare’s network, which would be closer to end users. So we talked with many of the leading browser isolation companies about potentially partnering. Unfortunately, as we experimented with their technologies, even with our vast network, we couldn’t overcome the sluggish feel that plagues existing browser isolation solutions.

Enter S2 Systems

That’s when we were introduced to S2 Systems. I clearly remember first trying the S2 demo because my first reaction was: “This can’t be working correctly, it’s too fast.” The S2 team had taken a different approach to browser isolation. Rather than pushing down a bitmap of what the screen looked like, they pushed down the vector commands to draw what’s on the screen. The result was an experience that was typically at least as fast as browsing locally and without broken pages.

The best, albeit imperfect, analogy I’ve come up with to describe the difference between S2’s technology and other browser isolation companies is the difference between Windows XP and Mac OS X when they were both launched in 2001. Windows XP’s original graphics were based on bitmapped images. Mac OS X’s were based on vectors. Remember the magic of watching an application “genie” in and out of the Mac OS X Dock? Check it out in a video from the launch…

At the time watching a window slide in and out of the dock seemed like magic compared with what you could do with bitmapped user interfaces. You can hear the awe in the reaction from the audience. That awe that we’ve all gotten used to in UIs today comes from the power of vector images. And, if you’ve been underwhelmed by the pixel-pushed bitmaps of existing browser isolation technologies, just wait until you see what is possible with S2’s technology.

We were so impressed with the team and the technology that we acquired the company. We will be integrating the S2 technology into Cloudflare Gateway Enterprise. The browser isolation technology will run across Cloudflare’s entire global network, bringing it within milliseconds of virtually every Internet user. You can learn more about this approach in Darren Remington's blog post.

Once the rollout is complete in the second half of 2020 we expect we will be able to offer the first full browser isolation technology that doesn’t force you to sacrifice performance. In the meantime, if you’d like a demo of the S2 technology in action, let us know.

The Promise of a Faster Internet for Everyone

Cloudflare’s mission is to help build a better Internet. With Cloudflare for Teams, we’ve extended our network to protect the people and organizations that use the Internet to do their jobs. We’re excited to help a more modern, mobile, and cloud-enabled Internet be safer and faster than it ever was with traditional hardware appliances.

But the same technology we’re deploying now to improve enterprise security holds further promise. The most interesting Internet applications keep getting more complicated and, in turn, requiring more bandwidth and processing power to use.

For those of us fortunate enough to be able to afford the latest iPhone, we continue to reap the benefits of an increasingly powerful set of Internet-enabled tools. But try to use the Internet on a mobile phone from a few generations back, and you can see how quickly the latest Internet applications leave legacy devices behind. That’s a problem if we want to bring the next 4 billion Internet users online.

We need a paradigm shift if the sophistication of applications and the complexity of interfaces continue to keep pace with the latest generation of devices. To make the best of the Internet available to everyone, we may need to shift the work of the Internet off the end devices we all carry around in our pockets and let the network — where power, bandwidth, and CPU are relatively plentiful — carry more of the load.

That’s the long term promise of what S2’s technology combined with Cloudflare’s network may someday power. If we can make it so a less expensive device can run the latest Internet applications — using less battery, bandwidth, and CPU than ever before possible — then we can make the Internet more affordable and accessible for everyone.

We started with Cloudflare for Infrastructure. Today we’re announcing Cloudflare for Teams. But our ambition is nothing short of Cloudflare for Everyone.

Early Feedback on Cloudflare for Teams from Customers and Partners

"Cloudflare Access has enabled Ziff Media Group to seamlessly and securely deliver our suite of internal tools to employees around the world on any device, without the need for complicated network configurations,” said Josh Butts, SVP Product & Technology, Ziff Media Group.

“VPNs are frustrating and lead to countless wasted cycles for employees and the IT staff supporting them,” said Amod Malviya, Cofounder and CTO, Udaan. “Furthermore, conventional VPNs can lull people into a false sense of security. With Cloudflare Access, we have a far more reliable, intuitive, secure solution that operates on a per user, per access basis. I think of it as Authentication 2.0 — even 3.0”

“Roman makes healthcare accessible and convenient,” said Ricky Lindenhovius, Engineering Director, Roman Health. “Part of that mission includes connecting patients to physicians, and Cloudflare helps Roman securely and conveniently connect doctors to internally managed tools. With Cloudflare, Roman can evaluate every request made to internal applications for permission and identity, while also improving speed and user experience.”

“We’re excited to partner with Cloudflare to provide our customers an innovative approach to enterprise security that combines the benefits of endpoint protection and network security," said Tom Barsi, VP Business Development, VMware. "VMware Carbon Black is a leading endpoint protection platform (EPP) and offers visibility and control of laptops, servers, virtual machines, and cloud infrastructure at scale. In partnering with Cloudflare, customers will have the ability to use VMware Carbon Black’s device health as a signal in enforcing granular authentication to a team’s internally managed application via Access, Cloudflare’s Zero Trust solution. Our joint solution combines the benefits of endpoint protection and a zero trust authentication solution to keep teams working on the Internet more secure."

“Rackspace is a leading global technology services company accelerating the value of the cloud during every phase of our customers’ digital transformation,” said Lisa McLin, vice president of alliances and channel chief at Rackspace. “Our partnership with Cloudflare enables us to deliver cutting edge networking performance to our customers and helps them leverage a software defined networking architecture in their journey to the cloud.”

“Employees are increasingly working outside of the traditional corporate headquarters. Distributed and remote users need to connect to the Internet, but today’s security solutions often require they backhaul those connections through headquarters to have the same level of security,” said Michael Kenney, head of strategy and business development for Ingram Micro Cloud. “We’re excited to work with Cloudflare whose global network helps teams of any size reach internally managed applications and securely use the Internet, protecting the data, devices, and team members that power a business.”

"At Okta, we’re on a mission to enable any organization to securely use any technology. As a leading provider of identity for the enterprise, Okta helps organizations remove the friction of managing their corporate identity for every connection and request that their users make to applications. We’re excited about our partnership with Cloudflare and bringing seamless authentication and connection to teams of any size,” said Chuck Fontana, VP, Corporate & Business Development, Okta.

"Organizations need one unified place to see, secure, and manage their endpoints,” said Matt Hastings, Senior Director of Product Management at Tanium. “We are excited to partner with Cloudflare to help teams secure their data, off-network devices, and applications. Tanium’s platform provides customers with a risk-based approach to operations and security with instant visibility and control into their endpoints. Cloudflare helps extend that protection by incorporating device data to enforce security for every connection made to protected resources.”

“OneLogin is happy to partner with Cloudflare to advance security teams' identity control in any environment, whether on-premise or in the cloud, without compromising user performance," said Gary Gwin, Senior Director of Product at OneLogin. "OneLogin’s identity and access management platform securely connects people and technology for every user, every app, and every device. The OneLogin and Cloudflare for Teams integration provides a comprehensive identity and network control solution for teams of all sizes.”

“Ping Identity helps enterprises improve security and user experience across their digital businesses,” said Loren Russon, Vice President of Product Management, Ping Identity. “Cloudflare for Teams integrates with Ping Identity to provide a comprehensive identity and network control solution to teams of any size, and ensures that only the right people get the right access to applications, seamlessly and securely."

"Our customers increasingly leverage deep observability data to address both operational and security use cases, which is why we launched Datadog Security Monitoring," said Marc Tremsal, Director of Product Management at Datadog. "Our integration with Cloudflare already provides our customers with visibility into their web and DNS traffic; we're excited to work together as Cloudflare for Teams expands this visibility to corporate environments."

“As more companies support employees who work on corporate applications from outside of the office, it is vital that they understand each request users are making. They need real-time insights and intelligence to react to incidents and audit secure connections," said John Coyle, VP of Business Development, Sumo Logic. "With our partnership with Cloudflare, customers can now log every request made to internal applications and automatically push them directly to Sumo Logic for retention and analysis."

“CloudGenix is excited to partner with Cloudflare to provide an end-to-end security solution from the branch to the cloud. As enterprises move off of expensive legacy MPLS networks and adopt branch-to-internet breakout policies, the CloudGenix CloudBlade platform and Cloudflare for Teams together can make this transition seamless and secure. We’re looking forward to Cloudflare’s roadmap with this announcement and partnership opportunities in the near term,” said Aaron Edwards, Field CTO, CloudGenix.

“In the face of limited cybersecurity resources, organizations are looking for highly automated solutions that work together to reduce the likelihood and impact of today’s cyber risks,” said Akshay Bhargava, Chief Product Officer, Malwarebytes. “With Malwarebytes and Cloudflare together, organizations are deploying more than twenty layers of security defense-in-depth. Using just two solutions, teams can secure their entire enterprise from device, to the network, to their internal and external applications.”

"Organizations' sensitive data is vulnerable in-transit over the Internet and when it's stored at its destination in public cloud, SaaS applications and endpoints,” said Pravin Kothari, CEO of CipherCloud. “CipherCloud is excited to partner with Cloudflare to secure data in all stages, wherever it goes. Cloudflare’s global network secures data in-transit without slowing down performance. CipherCloud CASB+ provides a powerful cloud security platform with end-to-end data protection and adaptive controls for cloud environments, SaaS applications and BYOD endpoints. Working together, teams can rely on integrated Cloudflare and CipherCloud solution to keep data always protected without compromising user experience.”

Categories: Technology

Security on the Internet with Cloudflare for Teams

Tue, 07/01/2020 - 14:00

Your experience using the Internet has continued to improve over time. It’s gotten faster, safer, and more reliable. However, you probably have to use a different, worse version of it when you do your work. While the Internet kept getting better, businesses and their employees were stuck using their own private networks.

In those networks, teams hosted their own applications, stored their own data, and protected all of it by building a castle and moat around that private world. This model hid internally managed resources behind VPN appliances and on-premise firewall hardware. The experience was awful, for users and administrators alike. While the rest of the Internet became more performant and more reliable, business users were stuck in an alternate universe.

That legacy approach was less secure and slower than teams wanted, but the corporate perimeter mostly worked for a time. However, that began to fall apart with the rise of cloud-delivered applications. Businesses migrated to SaaS versions of software that previously lived in that castle and behind that moat. Users needed to connect to the public Internet to do their jobs, and attackers made the Internet unsafe in sophisticated, unpredictable ways, which opened up every business to a new world of never-ending risks.

How did enterprise security respond? By trying to solve a new problem with a legacy solution, and forcing the Internet into equipment that was only designed for private, corporate networks. Instead of benefitting from the speed and availability of SaaS applications, users had to backhaul Internet-bound traffic through the same legacy boxes that made their private network miserable.

Teams then watched as their bandwidth bills increased. More traffic to the Internet from branch offices forced more traffic over expensive, dedicated links. Administrators now had to manage a private network and the connections to the entire Internet for their users, all with the same hardware. More traffic required more hardware and the cycle became unsustainable.

Cloudflare’s first wave of products secured and improved the speed of those sites by letting customers, from free users to some of the largest properties on the Internet, replace that hardware stack with Cloudflare’s network. We could deliver capacity at a scale that would be impossible for nearly any company to build themselves. We deployed data centers in over 200 cities around the world that help us reach users wherever they are.

We built a unique network to let sites scale how they secured infrastructure on the Internet with their own growth. But internally, businesses and their employees were stuck using their own private networks.

Just as we helped organizations secure their infrastructure by replacing boxes, we can do the same for their teams and their data. Today, we’re announcing a new platform that applies our network, and everything we’ve learned, to make the Internet faster and safer for teams.

Cloudflare for Teams protects enterprises, devices, and data by securing every connection without compromising user performance. The speed, reliability, and protection we brought to securing infrastructure are extended to everything your team does on the Internet.

The legacy world of corporate security

Organizations all share three problems they need to solve at the network level:

  1. Secure team member access to internally managed applications
  2. Secure team members from threats on the Internet
  3. Secure the corporate data that lives in both environments

Each of these challenges poses a real risk to any team. If any component is compromised, the entire business becomes vulnerable.

Internally managed applications

Solving the first bucket, internally managed applications, started by building a perimeter around those internal resources. Administrators deployed applications on a private network and users outside of the office connected to them with client VPN agents through VPN appliances that lived back on-site.

Users hated it, and they still do, because it made it harder to get their jobs done. A sales team member traveling to a customer visit in the back of a taxi had to start a VPN client on their phone just to review details about the meeting. An engineer working remotely had to sit and wait as every connection they made to developer tools was backhauled through a central VPN appliance.

Administrators and security teams also had issues with this model. Once a user connects to the private network, they’re typically able to reach multiple resources without having to prove they’re authorized to do so. Just because I’m able to enter the front door of an apartment building doesn’t mean I should be able to walk into any individual apartment. However, on private networks, enforcing additional security within the bounds of the private network required complicated microsegmentation, if it was done at all.

Threats on the Internet

The second challenge, securing users connecting to SaaS tools on the public Internet and applications in the public cloud, required security teams to protect against known threats and potential zero-day attacks as their users left the castle and moat.

How did most companies respond? By forcing all traffic leaving branch offices or remote users back through headquarters and using the same hardware that secured their private network to try and build a perimeter around the Internet, at least the Internet their users accessed. All of the Internet-bound traffic leaving a branch office in Asia, for example, would be sent back through a central location in Europe, even if the destination was just down the street.

Organizations needed those connections to be stable, and to prioritize certain functions like voice and video, so they paid carriers to support dedicated multi-protocol label switching (MPLS) links. MPLS delivered improved performance by labeling traffic so that downstream routers can forward it without performing an IP lookup, but it was eye-wateringly expensive.

Securing data

The third challenge, keeping data safe, became a moving target. Organizations had to keep data secure in a consistent way as it lived and moved between private tools on corporate networks and SaaS applications like Salesforce or Office 365.

The answer? More of the same. Teams backhauled traffic over MPLS links to a place where data could be inspected, adding more latency and introducing more hardware that had to be maintained.

What changed?

The balance of internal versus external traffic began to shift as SaaS applications became the new default for small businesses and Fortune 500s alike. Users now do most of their work on the Internet, with tools like Office 365 continuing to gain adoption. As those tools become more popular, more data leaves the moat and lives on the public Internet.

User behavior also changed. Users left the office and worked from multiple devices, both managed and unmanaged. Teams became more distributed and the perimeter was stretched to its limit.

This caused legacy approaches to fail

Legacy approaches to corporate security pushed the castle and moat model further out. However, that model simply cannot scale with how users do work on the Internet today.

Internally managed applications

Private networks give users headaches, but they’re also a constant and complex chore to maintain. VPNs require expensive equipment that must be upgraded or expanded and, as more users leave the office, that equipment must try and scale up.

The result is a backlog of IT help desk tickets as users struggle with their VPN and, on the other side of the house, administrators and security teams try to put band-aids on the approach.

Threats on the Internet

Organizations initially saved money by moving to SaaS tools, but wound up spending more money over time as their traffic increased and bandwidth bills climbed.

Additionally, threats evolve. The traffic sent back to headquarters was secured with static models of scanning and filtering using hardware gateways. Users were still vulnerable to new types of threats that these on-premise boxes did not block yet.

Securing data

The cost of keeping data secure in both environments also grew. Security teams attempted to inspect Internet-bound traffic for threats and data loss by backhauling branch office traffic through on-premise hardware, degrading speed and increasing bandwidth fees.

Even more dangerous, data now lived permanently outside of that castle and moat model. Organizations were now vulnerable to attacks that bypassed their perimeter and targeted SaaS applications directly.

How will Cloudflare solve these problems?

Cloudflare for Teams consists of two products, Cloudflare Access and Cloudflare Gateway.

We launched Access last year and are excited to bring it into Cloudflare for Teams. We built Cloudflare Access to solve the first challenge that corporate security teams face: protecting internally managed applications.

Cloudflare Access replaces corporate VPNs with Cloudflare’s network. Instead of placing internal tools on a private network, teams deploy them in any environment, including hybrid or multi-cloud models, and secure them consistently with Cloudflare’s network.

Deploying Access does not require exposing new holes in corporate firewalls. Teams connect their resources through a secure outbound connection, Argo Tunnel, which runs in your infrastructure to connect the applications and machines to Cloudflare. That tunnel makes outbound-only calls to the Cloudflare network and organizations can replace complex firewall rules with just one: disable all inbound connections.
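
A conceptual TypeScript sketch of that outbound-only pattern may help; note this is not how the actual tunnel software is implemented, and the edge hostname and registration step below are hypothetical, invented for illustration.

    import { connect } from "node:tls";

    // The machine behind the firewall dials *out* to the edge over TLS and
    // serves requests arriving back over that same connection. No inbound
    // port is ever opened.
    const EDGE_HOST = "edge.example.net"; // hypothetical edge endpoint

    function startTunnel(): void {
      const socket = connect({ host: EDGE_HOST, port: 443 }, () => {
        // Hypothetical next steps: register this origin with the edge, then
        // proxy requests from the socket to the local app (localhost:8000)
        // and stream responses back over the same outbound connection.
      });
      // If the connection drops, re-dial outbound; still no inbound rule.
      socket.on("close", () => setTimeout(startTunnel, 1000));
      socket.on("error", () => socket.destroy());
    }

    startTunnel();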

Administrators then build rules to decide who should authenticate to and reach the tools protected by Access. Whether those resources are virtual machines powering business operations or internal web applications, like Jira or iManage, when a user needs to connect, they pass through Cloudflare first.

When users need to connect to the tools behind Access, they are prompted to authenticate with their team’s SSO and, if valid, are instantly connected to the application without being slowed down. Internally managed apps suddenly feel like SaaS products, and the login experience is seamless and familiar.

Behind the scenes, every request made to those internal tools hits Cloudflare first where we enforce identity-based policies. Access evaluates and logs every request to those apps for identity, to give administrators more visibility and to offer more security than a traditional VPN.
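
As a toy model of that per-request evaluation (heavily simplified and hypothetical: the real product validates a signed token issued after the SSO login, which this sketch assumes has already happened):

    // Identity is assumed to be pre-verified here; in reality it arrives as
    // a signed JWT that must be cryptographically checked on every request.
    type Identity = { email: string; groups: string[] };
    type Policy = { appHostname: string; allowedGroups: string[] };

    function authorize(
      identity: Identity | null,
      hostname: string,
      policies: Policy[],
    ): boolean {
      if (!identity) return false; // unauthenticated: redirect to SSO instead
      const policy = policies.find(p => p.appHostname === hostname);
      if (!policy) return false;   // default deny for unknown applications
      return identity.groups.some(g => policy.allowedGroups.includes(g));
    }

    // Every decision can also be logged, giving the audit trail a VPN lacks.
    const policies: Policy[] = [
      { appHostname: "jira.internal.example.com", allowedGroups: ["engineering"] },
    ];
    const user: Identity = { email: "dev@example.com", groups: ["engineering"] };
    console.log(authorize(user, "jira.internal.example.com", policies)); // true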

Every Cloudflare data center, in 200 cities around the world, performs the entire authentication check. Users connect faster, wherever they are working, versus having to backhaul traffic to a home office.

Access also saves time for administrators. Instead of configuring complex and error-prone network policies, IT teams build policies that enforce authentication using their identity provider. Security leaders can control who can reach internal applications in a single pane of glass and audit comprehensive logs from one source.

In the last year, we’ve released features that expand how teams can use Access so they can fully eliminate their VPN. We’ve added support for RDP, SSH, and released support for short-lived certificates that replace static keys. However, teams also use applications that do not run in infrastructure they control, such as SaaS applications like Box and Office 365. To solve that challenge, we’re releasing a new product, Cloudflare Gateway.

Cloudflare Gateway secures teams by making the first destination a Cloudflare data center located near them, for all outbound traffic. The product places Cloudflare’s global network between users and the Internet, rather than forcing the Internet through legacy hardware on-site.

Cloudflare Gateway’s first feature begins by preventing users from running into phishing scams or malware sites by combining the world’s fastest DNS resolver with Cloudflare’s threat intelligence. Gateway resolver can be deployed to office networks and user devices in a matter of minutes. Once configured, Gateway actively blocks potential malware and phishing sites while also applying content filtering based on policies administrators configure.

However, threats can be hidden in otherwise healthy hostnames. To protect users from more advanced threats, Gateway will audit URLs and, if enabled, inspect packets to find potential attacks before they compromise a device or office network. That same deep packet inspection can then be applied to prevent the accidental or malicious export of data.

Organizations can add Gateway’s advanced threat prevention in two models:

  1. by connecting office networks to the Cloudflare security fabric through GRE tunnels and
  2. by distributing forward proxy clients to mobile devices.

The first model, delivered through Cloudflare Magic Transit, will give enterprises a way to migrate to Gateway without disrupting their current workflow. Instead of backhauling office traffic to centralized on-premise hardware, teams will point traffic to Cloudflare over GRE tunnels. Once the outbound traffic arrives at Cloudflare, Gateway can apply file type controls, in-line inspection, and data loss protection without impacting connection performance. Simultaneously, Magic Transit protects a corporate IP network from inbound attacks.

When users leave the office, Gateway’s client application will deliver the same level of Internet security. Every connection from the device will pass through Cloudflare first, where Gateway can apply threat prevention policies. Cloudflare can also deliver that security without compromising user experience, building on new technologies like the WireGuard protocol and integrating features from Cloudflare Warp, our popular individual forward proxy.

In both environments, one of the most common vectors for attacks is still the browser. Zero-day threats can compromise devices by using the browser as a vehicle to execute code.

Existing browser isolation solutions attempt to solve this challenge in one of two approaches: 1) pixel pushing and 2) DOM reconstruction. Both approaches lead to tradeoffs in performance and security. Pixel pushing degrades speed while also driving up the cost to stream sessions to users. DOM reconstruction attempts to strip potentially harmful content before sending it to the user. That tactic relies on known vulnerabilities and is still exposed to the zero day threats that isolation tools were meant to solve.

Cloudflare Gateway will feature always-on browser isolation that not only protects users from zero day threats, but can also make browsing the Internet faster. The solution will apply a patented approach to send vector commands that a browser can render without the need for an agent on the device. A user’s browser session will instead run in a Cloudflare data center where Gateway destroys the instance at the end of each session, keeping malware away from user devices without compromising performance.

When deployed, remote browser sessions will run in one of Cloudflare’s 200 data centers, connecting users to a faster, safer model of navigating the Internet without the compromises of legacy approaches. If you would like to learn more about this approach to browser isolation, I'd encourage you to read Darren Remington's blog post on the topic.

Why Cloudflare?

To make infrastructure safer, and web properties faster, Cloudflare built out one of the world’s largest and most sophisticated networks. Cloudflare for Teams builds on that same platform, and all of its unique advantages.

Fast

Security should always be bundled with performance. Cloudflare’s infrastructure products delivered better protection while also improving speed. That’s possible because of the network we’ve built: both its distribution and the data we gather about it allow Cloudflare to optimize requests and connections.

Cloudflare for Teams brings that same speed to end users by using that same network and route optimization. Additionally, Cloudflare has built industry-leading components that will become features of this new platform. All of these components leverage Cloudflare’s network and scale to improve user performance.

Gateway’s DNS-filtering features build on Cloudflare’s 1.1.1.1 public DNS resolver, the world’s fastest resolver according to DNSPerf. To protect entire connections, Cloudflare for Teams will deploy the same technology that underpins Warp, a new type of VPN with consistently better reviews than competitors.

Massive scalability

Cloudflare’s 30 Tbps of network capacity can scale to meet the needs of nearly any enterprise. Customers can stop worrying about buying enough hardware to meet their organization’s needs and, instead, replace it with Cloudflare.

Near users, wherever they are — literally

Cloudflare’s network operates in 200 cities and more than 90 countries around the world, putting Cloudflare’s security and performance close to users, wherever they work.

That network includes presence in global headquarters, like London and New York, but also in traditionally underserved regions around the world.

Cloudflare data centers operate within 100 milliseconds of 99% of the Internet-connected population in the developed world, and within 100 milliseconds of 94% of the Internet-connected population globally. All of your end users should feel like they have the performance traditionally only available to those in headquarters.

Easier for administrators

When security products are confusing, teams make mistakes that become incidents. Cloudflare’s solution is straightforward and easy to deploy. Most security providers in this market built features first and never considered usability or implementation.

Cloudflare Access can be deployed in less than an hour; Gateway features will build on top of that dashboard and workflow. Cloudflare for Teams brings the same ease-of-use of our tools that protect infrastructure to the products that now secure users, devices, and data.

Better threat intelligence

Cloudflare’s network already secures more than 20 million Internet properties and blocks 72 billion cyber threats each day. We build products using the threat data we gather from protecting 11 million HTTP requests per second on average.

What’s next?

Cloudflare Access is available right now. You can start replacing your team’s VPN with Cloudflare’s network today. Certain features of Cloudflare Gateway are available in beta now, and others will be added in beta over time. You can sign up to be notified about Gateway now.

Categories: Technology

Cloudflare + Remote Browser Isolation

Tue, 07/01/2020 - 14:00

Cloudflare announced today that it has purchased S2 Systems Corporation, a Seattle-area startup that has built an innovative remote browser isolation solution unlike any other currently in the market. The majority of endpoint compromises involve web browsers — by putting space between users’ devices and where web code executes, browser isolation makes endpoints substantially more secure. In this blog post, I’ll discuss what browser isolation is, why it is important, how the S2 Systems cloud browser works, and how it fits with Cloudflare’s mission to help build a better Internet.

What’s wrong with web browsing?

It’s been more than 30 years since Tim Berners-Lee wrote the project proposal defining the technology underlying what we now call the world wide web. What Berners-Lee envisioned as being useful for “several thousand people, many of them very creative, all working toward common goals”[1] has grown to become a fundamental part of commerce, business, the global economy, and an integral part of society used by more than 58% of the world’s population[2].

The world wide web and web browsers have unequivocally become the platform for much of the productive work (and play) people do every day. However, as the pervasiveness of the web grew, so did opportunities for bad actors. Hardly a day passes without a major new cybersecurity breach in the news. Several contributing factors have helped propel cybercrime to unprecedented levels: the commercialization of hacking tools, the emergence of malware-as-a-service, the presence of well-financed nation states and organized crime, and the development of cryptocurrencies which enable malicious actors of all stripes to anonymously monetize their activities.

The vast majority of security breaches originate from the web. Gartner calls the public Internet a “cesspool of attacks” and identifies web browsers as the primary culprit responsible for 70% of endpoint compromises.[3] This should not be surprising. Although modern web browsers are remarkable, many fundamental architectural decisions were made in the 1990s before concepts like security, privacy, corporate oversight, and compliance were issues or even considerations. Core web browsing functionality (including the entire underlying WWW architecture) was designed and built for a different era and circumstances.

In today’s world, several web browsing assumptions are outdated or even dangerous. Web browsers and the underlying server technologies encompass an extensive – and growing – list of complex interrelated technologies. These technologies are constantly in flux, driven by vibrant open source communities, content publishers, search engines, advertisers, and competition between browser companies. As a result of this underlying complexity, web browsers have become primary attack vectors. According to Gartner, “the very act of users browsing the internet and clicking on URL links opens the enterprise to significant risk. […] Attacking thru the browser is too easy, and the targets too rich.”[4] Even “ostensibly ‘good’ websites are easily compromised and can be used to attack visitors” (Gartner[5]) with more than 40% of malicious URLs found on good domains (Webroot[6]). (A complete list of vulnerabilities is beyond the scope of this post.)

The very structure and underlying technologies that power the web are inherently difficult to secure. Some browser vulnerabilities result from illegitimate use of legitimate functionality: enabling browsers to download files and documents is good, but allowing downloading of files infected with malware is bad; dynamic loading of content across multiple sites within a single webpage is good, but cross-site scripting is bad; enabling an extensive advertising ecosystem is good, but the inability to detect hijacked links or malicious redirects to malware or phishing sites is bad; etc.

Enterprise Browsing Issues

Enterprises have additional challenges with traditional browsers.

Paradoxically, IT departments have the least amount of control over the most ubiquitous app in the enterprise – the web browser. The most common complaints about web browsers from enterprise security and IT professionals are:

  1. Security (obviously). The public internet is a constant source of security breaches and the problem is growing given an 11x escalation in attacks since 2016 (Meeker[7]). Costs of detection and remediation are escalating and the reputational damage and financial losses for breaches can be substantial.
  2. Control. IT departments have little visibility into user activity and limited ability to leverage content disarm and reconstruction (CDR) and data loss prevention (DLP) mechanisms, including when, where, or who downloaded/uploaded files.
  3. Compliance. The inability to control data and activity across geographies, or to capture the audit telemetry required to meet increasingly strict regulatory requirements, results in significant exposure to penalties and fines.

Given vulnerabilities exposed through everyday user activities such as email and web browsing, some organizations attempt to restrict these activities. As both are legitimate and critical business functions, efforts to limit or curtail web browser use inevitably fail or have a substantive negative impact on business productivity and employee morale.

Current approaches to mitigating security issues inherent in browsing the web are largely based on signature technology for data files and executables, and lists of known good/bad URLs and DNS addresses. The challenge with these approaches is the difficulty of keeping current with known attacks (file signatures, URLs and DNS addresses) and their inherent vulnerability to zero-day attacks. Hackers have devised automated tools to defeat signature-based approaches (e.g. generating hordes of files with unknown signatures) and create millions of transient websites in order to defeat URL/DNS blacklists.

While these approaches certainly prevent some attacks, the growing number of incidents and severity of security breaches clearly indicate more effective alternatives are needed.

What is browser isolation?

The core concept behind browser isolation is security-through-physical-isolation to create a “gap” between a user’s web browser and the endpoint device thereby protecting the device (and the enterprise network) from exploits and attacks. Unlike secure web gateways, antivirus software, or firewalls which rely on known threat patterns or signatures, this is a zero-trust approach.

There are two primary browser isolation architectures: (1) client-based local isolation and (2) remote isolation.

Local browser isolation attempts to isolate a browser running on a local endpoint using app-level or OS-level sandboxing. In addition to leaving the endpoint at risk when there is an isolation failure, these systems require significant endpoint resources (memory + compute), tend to be brittle, and are difficult for IT to manage as they depend on support from specific hardware and software components.

Further, local browser isolation does nothing to address the control and compliance issues mentioned above.

Remote browser isolation (RBI) protects the endpoint by moving the browser to a remote service in the cloud or to a separate on-premises server within the enterprise network:

  • On-premises isolation simply relocates the risk from the endpoint to another location within the enterprise without actually eliminating the risk.
  • Cloud-based remote browsing isolates the end-user device and the enterprise’s network while fully enabling IT control and compliance solutions.

Given the inherent advantages, most browser isolation solutions – including S2 Systems – leverage cloud-based remote isolation. Properly implemented, remote browser isolation can protect the organization from browser exploits, plug-ins, zero-day vulnerabilities, malware and other attacks embedded in web content.

How does Remote Browser Isolation (RBI) work?

In a typical cloud-based RBI system (the blue-dashed box ❶ below), individual remote browsers ❷ are run in the cloud as disposable containerized instances – typically, one instance per user. The remote browser sends the rendered contents of a web page to the user endpoint device ❹ using a specific protocol and data format ❸. Actions by the user, such as keystrokes, mouse and scroll commands, are sent back to the isolation service over a secure encrypted channel where they are processed by the remote browser and any resulting changes to the remote browser webpage are sent back to the endpoint device.

[Figure: a typical cloud-based RBI system ❶, with remote browsers ❷ exchanging rendering data ❸ with the user’s endpoint device ❹]
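
The message flow can be summarized with a small TypeScript sketch; the types and field names below are invented for illustration and do not correspond to any vendor’s actual wire protocol:

    // Hypothetical wire messages for a generic RBI service.
    type Upstream =                      // endpoint -> remote browser
      | { kind: "key"; code: string }
      | { kind: "mouse"; x: number; y: number; button: number }
      | { kind: "scroll"; dx: number; dy: number };

    type Downstream =                    // remote browser -> endpoint
      | { kind: "frame"; png: Uint8Array }       // pixel pushing: encoded image
      | { kind: "draw"; commands: Uint8Array };  // vector remoting: draw commands

    // The endpoint "remote controls" the cloud browser: it sends input
    // upstream and applies whatever rendering data comes back downstream.
    function handleDownstream(msg: Downstream): void {
      if (msg.kind === "frame") {
        // decode and blit the bitmap (bandwidth-heavy, resolution-dependent)
      } else {
        // replay compact draw commands locally (see the S2 section below)
      }
    }

    // Example: a click travels up; rendering updates come back down.
    const click: Upstream = { kind: "mouse", x: 120, y: 48, button: 0 };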

In effect, the endpoint device is “remote controlling” the cloud browser. Some RBI systems use proprietary clients installed on the local endpoint while others leverage existing HTML5-compatible browsers on the endpoint and are considered ‘clientless.’

Data breaches that occur in the remote browser are isolated from the local endpoint and enterprise network. Every remote browser instance is treated as if compromised and terminated after each session. New browser sessions start with a fresh instance. Obviously, the RBI service must prevent browser breaches from leaking outside the browser containers to the service itself. Most RBI systems provide remote file viewers negating the need to download files but also have the ability to inspect files for malware before allowing them to be downloaded.

A critical component in the above architecture is the specific remoting technology employed by the cloud RBI service. The remoting technology has a significant impact on the operating cost and scalability of the RBI service, website fidelity and compatibility, bandwidth requirements, endpoint hardware/software requirements and even the user experience. Remoting technology also determines the effective level of security provided by the RBI system.

All current cloud RBI systems employ one of two remoting technologies:

(1)    Pixel pushing is a video-based approach which captures pixel images of the remote browser ‘window’ and transmits a sequence of images to the client endpoint browser or proprietary client. This is similar to how remote desktop and VNC systems work. Although considered to be relatively secure, there are several inherent challenges with this approach:

  • Continuously encoding and transmitting video streams of remote webpages to user endpoint devices is very costly. Scaling this approach to millions of users is financially prohibitive and logistically complex.
  • Requires significant bandwidth. Even when highly optimized, pushing pixels is bandwidth intensive.
  • Unavoidable latency results in an unsatisfactory user experience. These systems tend to be slow and generate a lot of user complaints.
  • Mobile support is degraded by high bandwidth requirements compounded by inconsistent connectivity.
  • HiDPI displays may render at lower resolutions. Pixel counts grow quadratically with resolution, which means remote browser sessions (particularly fonts) on HiDPI devices can appear fuzzy or out of focus.

(2) DOM reconstruction emerged as a response to the shortcomings of pixel pushing. DOM reconstruction attempts to clean webpage HTML, CSS, etc. before forwarding the content to the local endpoint browser. The underlying HTML, CSS, etc., are reconstructed in an attempt to eliminate active code, known exploits, and other potentially malicious content. While addressing the latency, operational cost, and user experience issues of pixel pushing, it introduces two significant new issues:

  • Security. The underlying technologies – HTML, CSS, web fonts, etc. – are the attack vectors hackers leverage to breach endpoints. Attempting to remove malicious content or code is like washing mosquitos: you can attempt to clean them, but they remain inherent carriers of dangerous and malicious material. It is impossible to identify, in advance, all the means of exploiting these technologies even through an RBI system.
  • Website fidelity. Inevitably, attempting to remove malicious active code and reconstruct HTML, CSS, and other aspects of modern websites results in broken pages that don’t render properly or don’t render at all. Websites that work today may not work tomorrow as site publishers make daily changes that may break DOM reconstruction functionality. The result is an infinite tail of issues requiring significant resources in an endless game of whack-a-mole. Some RBI solutions struggle to support common enterprise-wide services like Google G Suite or Microsoft Office 365 even as malware-laden web email continues to be a significant source of breaches.
Cloudflare + Remote Browser Isolation

Customers are left to choose between a secure solution with a bad user experience and high operating costs, or a faster, much less secure solution that breaks websites. These tradeoffs have driven some RBI providers to implement both remoting technologies into their products. However, this leaves customers to pick their poison without addressing the fundamental issues.

Given the significant tradeoffs in RBI systems today, one common optimization for current customers is to deploy remote browsing capabilities to only the most vulnerable users in an organization such as high-risk executives, finance, business development, or HR employees. Like vaccinating half the pupils in a classroom, this results in a false sense of security that does little to protect the larger organization.

Unfortunately, the largest “gap” created by current remote browser isolation systems is the void between the potential of the underlying isolation concept and the implementation reality of currently available RBI systems.

S2 Systems Remote Browser Isolation

S2 Systems remote browser isolation is a fundamentally different approach based on S2-patented technology called Network Vector Rendering (NVR).

The S2 remote browser is based on the open-source Chromium engine on which Google Chrome is built. In addition to powering Google Chrome which has a ~70% market share[8], Chromium powers twenty-one other web browsers including the new Microsoft Edge browser.[9] As a result, significant ongoing investment in the Chromium engine ensures the highest levels of website support, compatibility and a continuous stream of improvements.

A key architectural feature of the Chromium browser is its use of the Skia graphics library. Skia is a widely-used cross-platform graphics engine for Android, Google Chrome, Chrome OS, Mozilla Firefox, Firefox OS, FitbitOS, Flutter, the Electron application framework and many other products. Like Chromium, the pervasiveness of Skia ensures ongoing broad hardware and platform support.

[Figure: Skia code fragment]

Everything visible in a Chromium browser window is rendered through the Skia rendering layer. This includes application window UI such as menus, but more importantly, the entire contents of the webpage window are rendered through Skia. Chromium compositing, layout and rendering are extremely complex with multiple parallel paths optimized for different content types, device contexts, etc. The following figure is an egregious simplification for illustration purposes of how S2 works (apologies to Chromium experts):

[Figure: simplified illustration of how S2 works]

S2 Systems NVR technology intercepts the remote Chromium browser’s Skia draw commands ❶, tokenizes and compresses them, then encrypts and transmits them across the wire ❷ to any HTML5 compliant web browser ❸ (Chrome, Firefox, Safari, etc.) running locally on the user endpoint desktop or mobile device. The Skia API commands captured by NVR are pre-rasterization which means they are highly compact.

On first use, the S2 RBI service transparently pushes an NVR WebAssembly (Wasm) library ❹ to the local HTML5 web browser on the endpoint device where it is cached for subsequent use. The NVR Wasm code contains an embedded Skia library and the necessary code to unpack, decrypt and “replay” the Skia draw commands from the remote RBI server to the local browser window. WebAssembly’s ability to “execute at native speed by taking advantage of common hardware capabilities available on a wide range of platforms”[10] results in near-native drawing performance.
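
To see why replaying draw commands client-side can feel native, consider this drastically simplified TypeScript sketch of a replayer. The command format here is a toy stand-in invented for illustration; S2’s actual NVR protocol carries tokenized, compressed, encrypted Skia commands that are far richer than this:

    // Toy vector-command replayer: a simplified stand-in for the NVR Wasm
    // library. A handful of compact commands can describe a screen that
    // would cost hundreds of kilobytes as a bitmap, and they rasterize at
    // the endpoint's native resolution, so HiDPI displays stay sharp.
    type DrawCmd =
      | { op: "rect"; x: number; y: number; w: number; h: number; fill: string }
      | { op: "text"; x: number; y: number; value: string; font: string };

    function replay(canvas: HTMLCanvasElement, commands: DrawCmd[]): void {
      const ctx = canvas.getContext("2d")!;
      for (const cmd of commands) {
        if (cmd.op === "rect") {
          ctx.fillStyle = cmd.fill;
          ctx.fillRect(cmd.x, cmd.y, cmd.w, cmd.h);
        } else {
          ctx.font = cmd.font;
          ctx.fillText(cmd.value, cmd.x, cmd.y);
        }
      }
    }

    replay(document.querySelector("canvas")!, [
      { op: "rect", x: 0, y: 0, w: 800, h: 60, fill: "#f6821f" },
      { op: "text", x: 16, y: 38, value: "Rendered from vector commands", font: "16px sans-serif" },
    ]);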

The S2 remote browser isolation service uses headless Chromium-based browsers in the cloud, transparently intercepts draw layer output, transmits the draw commands efficiently and securely over the web, and redraws them in the windows of local HTML5 browsers. This architecture has a number of technical advantages:

(1)    Security: the underlying data transport is not an existing attack vector and customers aren’t forced to make a tradeoff between security and performance.

(2)    Website compatibility: there are no website compatibility issues, nor a long tail of chasing evolving web technologies or emerging vulnerabilities.

(3)    Performance: the system is very fast, typically faster than local browsing (subject of a future blog post).

(4)    Transparent user experience: S2 remote browsing feels like native browsing; users are generally unaware when they are browsing remotely.

(5)    Requires less bandwidth than local browsing for most websites. Enables advanced caching and other proprietary optimizations unique to web browsers and the nature of web content and technologies.

(6)    Clientless: leverages existing HTML5 compatible browsers already installed on user endpoint desktop and mobile devices.

(7)    Cost-effective scalability: although the details are beyond the scope of this post, the S2 backend and NVR technology have substantially lower operating costs than existing RBI technologies. Operating costs translate directly to customer costs. The S2 system was designed to make deployment to an entire enterprise and not just targeted users (aka: vaccinating half the class) both feasible and attractive for customers.

(8)    RBI-as-a-platform: enables implementation of related/adjacent services such as DLP, content disarm & reconstruction (CDR), phishing detection and prevention, etc.

S2 Systems Remote Browser Isolation Service and underlying NVR technology eliminates the disconnect between the conceptual potential and promise of browser isolation and the unsatisfying reality of current RBI technologies.

Cloudflare + S2 Systems Remote Browser Isolation

Cloudflare’s global cloud platform is uniquely suited to remote browsing isolation. Seamless integration with our cloud-native performance, reliability and advanced security products and services provides powerful capabilities for our customers.

Our Cloudflare Workers architecture enables edge computing in 200 cities in more than 90 countries and will put a remote browser within 100 milliseconds of 99% of the Internet-connected population in the developed world. With more than 20 million Internet properties directly connected to our network, Cloudflare remote browser isolation will benefit from locally cached data and builds on the impressive connectivity and performance of our network. Our Argo Smart Routing capability leverages our communications backbone to route traffic across faster and more reliable network paths resulting in an average 30% faster access to web assets.

Once it has been integrated with our Cloudflare for Teams suite of advanced security products, remote browser isolation will provide protection from browser exploits, zero-day vulnerabilities, malware and other attacks embedded in web content. Enterprises will be able to secure the browsers of all employees without having to make trade-offs between security and user experience. The service will enable IT control of browser-conveyed enterprise data and compliance oversight. Seamless integration across our products and services will enable users and enterprises to browse the web without fear or consequence.

Cloudflare’s mission is to help build a better Internet. This means protecting users and enterprises as they work and play on the Internet; it means making Internet access fast, reliable and transparent. Reimagining and modernizing how web browsing works is an important part of helping build a better Internet.

[1] https://www.w3.org/History/1989/proposal.html

[2] “Internet World Stats,” https://www.internetworldstats.com/, retrieved 12/21/2019.

[3] Gartner, Inc., Neil MacDonald, “Innovation Insight for Remote Browser Isolation” (report ID: G00350577), 8 March 2018

[4] Gartner, Inc., Neil MacDonald, “Innovation Insight for Remote Browser Isolation”, 8 March 2018

[5] Gartner, Inc., Neil MacDonald, “Innovation Insight for Remote Browser Isolation”, 8 March 2018

[6] “2019 Webroot Threat Report: Forty Percent of Malicious URLs Found on Good Domains”, February 28, 2019

[7] “Kleiner Perkins 2018 Internet Trends”, Mary Meeker.

[8] https://www.statista.com/statistics/544400/market-share-of-internet-browsers-desktop/, retrieved December 21, 2019

[9] https://en.wikipedia.org/wiki/Chromium_(web_browser), retrieved December 29, 2019

[10] https://webassembly.org/, retrieved December 30, 2019

Categories: Technology

Cloudflare Expanded to 200 Cities in 2019

Sat, 04/01/2020 - 17:00

We have exciting news: Cloudflare closed out the decade by reaching our 200th city* across 90+ countries. Each new location increases the security, performance, and reliability of the 20-million-plus Internet properties on our network. Over the last quarter, we turned up seven data centers spanning from Chattogram, Bangladesh all the way to the Hawaiian Islands:

  • Chattogram & Dhaka, Bangladesh. These data centers are our first in Bangladesh, ensuring that its 161 million residents will have a better experience on our network.
  • Honolulu, Hawaii, USA. Honolulu is one of the most remote cities in the world; with our Honolulu data center up and running, Hawaiian visitors can be served 2,400 miles closer than ever before! Hawaii is a hub for many submarine cables in the Pacific, meaning that some Pacific Islands will also see significant improvements.
  • Adelaide, Australia. Our 7th Australasian data center can be found “down under” in the capital of South Australia. Despite being Australia’s fifth-largest city, Adelaide is often overlooked for Australian interconnection. We, for one, are happy to establish a presence in it and its unique UTC+9:30 time zone!
  • Thimphu, Bhutan. Bhutan is the sixth SAARC (South Asian Association for Regional Cooperation) country with a Cloudflare network presence. Thimphu is our first Bhutanese data center, continuing our mission of security and performance for all.
  • St George’s, Grenada. Our Grenadian data center is joining the Grenada Internet Exchange (GREX), the first non-profit Internet Exchange (IX) in the English-speaking Caribbean.

We’ve come a long way since our launch in 2010, moving from colocating in key Internet hubs to fanning out across the globe and partnering with local ISPs. This has allowed us to offer security, performance, and reliability to Internet users in all corners of the world. In addition to the 35 cities we added in 2019, we expanded our existing data centers behind-the-scenes. We believe there are a lot of opportunities to harness in 2020 as we look to bring our network and its edge-computing power closer and closer to everyone on the Internet.

*Includes cities where we have data centers with active Internet ports and those where we are configuring our servers to handle traffic for more customers (at the time of publishing).

Categories: Technology

Adopting a new approach to HTTP prioritization

Tue, 31/12/2019 - 19:13

Friday the 13th is a lucky day for Cloudflare for many reasons. On December 13, 2019 Tommy Pauly, co-chair of the IETF HTTP Working Group, announced the adoption of the "Extensible Prioritization Scheme for HTTP" - a new approach to HTTP prioritization.

Web pages are made up of many resources that must be downloaded before they can be presented to the user. The role of HTTP prioritization is to load the right bytes at the right time in order to achieve the best performance. This is a collaborative process between client and server: a client sends priority signals that the server can use to schedule the delivery of response data. In HTTP/1.1 the signal is basic: clients order requests smartly across a pool of about 6 connections. In HTTP/2 a single connection is used and clients send a signal per request, as a frame, which describes the relative dependency and weighting of the response. HTTP/3 tried to use the same approach, but dependencies don't work well when signals can be delivered out of order.

HTTP/3 is being standardised as part of the QUIC effort. As a Working Group (WG) we've been trying to fix the problems that non-deterministic ordering poses for HTTP priorities. However, in parallel some of us have been working on an alternative solution, the Extensible Prioritization Scheme, which fixes the problems by dropping dependencies and using an absolute weighting. This is signalled in an HTTP header field, meaning it can be backported to work with HTTP/2 or carried over HTTP/1.1 hops. The alternative proposal is documented in the individual draft draft-kazuho-httpbis-priority-04, co-authored by Kazuho Oku (Fastly) and myself. This has now been adopted by the IETF HTTP WG as the basis of further work; its adopted name will be draft-ietf-httpbis-priority-00.

To some extent document adoption is the end of one journey and the start of the next; sometimes the authors of the original work are not the best people to oversee the next phase. However, I'm pleased to say that Kazuho and I have been selected as co-editors of this new document. In this role we will reflect the consensus of the WG and help steward the next chapter of HTTP prioritization standardisation. Before the next journey begins in earnest, I wanted to take the opportunity to share my thoughts on the story of developing the alternative prioritization scheme through 2019.

I'd love to explain all the details of this new approach to HTTP prioritization, but the truth is I expect the standardization process to refine the design, so anything I describe would go stale quickly. However, it doesn't hurt to give a taste of what's in store; just be aware that it is all subject to change.

A recap on priorities

The essence of HTTP prioritization comes down to trying to download many things over constrained connectivity. To borrow some text from Pat Meenan: Web pages are made up of dozens (sometimes hundreds) of separate resources that are loaded and assembled by a browser into the final displayed content. Since it is not possible to download everything immediately, we prefer to fetch more important things before less important ones. The challenge comes in signalling the importance from client to server.

In HTTP/2, every connection has a priority tree that expresses the relative importance between requests. The tree starts with a single root node, and as requests are made they depend either on the root or on each other. Servers may use the tree to decide how to schedule sending response data, but clients cannot force a server to behave in any particular way.

To illustrate, imagine a client that makes three simple GET requests that all depend on root. As the server receives each request it grows its view of the priority tree:

The server starts with only the root node of the priority tree. As requests arrive, the tree grows. In this case all requests depend on the root, so the requests are priority siblings.

Once all requests are received, the server determines all requests have equal priority and that it should send response data using round-robin scheduling: send some fraction of response 1, then a fraction of response 2, then a fraction of response 3, and repeat until all responses are complete.
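To make that scheduling concrete, here is a toy sketch in TypeScript (emphatically not Cloudflare's implementation): equal-priority responses each send one fixed-size chunk per turn until all are drained.

// Toy round-robin scheduler: equal-priority responses are drained one
// fixed-size chunk at a time, in turn.
function roundRobin(responses: Uint8Array[], chunkSize = 3): string[] {
  const queues = responses.map((data, i) => ({ id: i + 1, offset: 0, data }));
  const schedule: string[] = [];
  while (queues.some(q => q.offset < q.data.length)) {
    for (const q of queues) {
      if (q.offset >= q.data.length) continue;
      const end = Math.min(q.offset + chunkSize, q.data.length);
      schedule.push(`response ${q.id}: bytes ${q.offset}-${end - 1}`);
      q.offset = end;
    }
  }
  return schedule;
}

// Three equal-priority responses interleave until all are complete.
console.log(roundRobin([new Uint8Array(6), new Uint8Array(6), new Uint8Array(6)]));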

A single HTTP/2 request-response exchange is made up of frames that are sent on a stream. A simple GET request would be sent using a single HEADERS frame:

HTTP/2 HEADERS frame layout.

Each region of a frame is a named field; a '?' indicates the field is optional, and the value in parentheses is the length in bytes, with '*' meaning variable length. The Header Block Fragment field holds compressed HTTP header fields (using HPACK), Pad Length and Padding relate to optional padding, and E, Stream Dependency and Weight combined are the priority signal that controls the priority tree.
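As a rough sketch of that layout (assuming no padding or CONTINUATION handling, and with the HPACK-encoded header block supplied by the caller), the priority fields sit between the fixed 9-byte frame header and the header block fragment:

// Builds an HTTP/2 HEADERS frame carrying priority fields (RFC 7540 §6.2).
// Sketch only: no padding, header block assumed already HPACK-encoded.
function headersFrameWithPriority(
  streamId: number,
  exclusive: boolean,
  dependency: number,
  weight: number,            // 1..256; encoded on the wire as weight - 1
  headerBlock: Uint8Array,
): Uint8Array {
  const PRIORITY = 0x20, END_HEADERS = 0x04;
  const payloadLen = 5 + headerBlock.length;   // E + dependency (4) + weight (1)
  const frame = new Uint8Array(9 + payloadLen);
  const view = new DataView(frame.buffer);

  view.setUint32(0, (payloadLen << 8) | 0x01); // 24-bit length + type (HEADERS)
  frame[4] = PRIORITY | END_HEADERS;           // flags
  view.setUint32(5, streamId & 0x7fffffff);    // reserved bit + 31-bit stream id

  // E bit + 31-bit stream dependency, then weight - 1.
  view.setUint32(9, ((exclusive ? 0x80000000 : 0) | (dependency & 0x7fffffff)) >>> 0);
  frame[13] = weight - 1;

  frame.set(headerBlock, 14);
  return frame;
}

Omitting the last five payload bytes (and the PRIORITY flag) yields the default signal described next.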

The Stream Dependency and Weight fields are optional, but their absence is interpreted as a signal to use the default values: dependency on the root with a weight of 16, meaning that the default priority scheduling strategy is round-robin. However, this is often a bad choice because important resources like HTML, CSS and JavaScript are tied up with things like large images. The following animation demonstrates this in the Edge browser, causing the page to be blank for 19 seconds. Our deep dive blog post explains the problem further.


The HEADERS frame E field is the interesting bit (pun intended). A request with the field set to 1 (true) means that the dependency is exclusive and nothing else can depend on the indicated node. To illustrate, imagine a client that sends three requests which set the E field to 1. As the server receives each request, it interprets this as an exclusive dependency on the root node. Because all requests have the same dependency on root, the tree has to be shuffled around to satisfy the exclusivity rules.

Each request has an exclusive dependency on the root node. The tree is shuffled as each request is received by the server.

The final version of the tree looks very different from our previous example. The server would schedule all of response 3, then all of response 2, then all of response 1. This could help load all of an HTML file before an image and thus improve the visual load behaviour.
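Here is a sketch of the reshuffling rule (per RFC 7540 §5.3.1): a new exclusive dependent of a node adopts all of that node's current children, so three exclusive insertions on the root produce the chain root → 3 → 2 → 1.

interface PriorityNode { id: number; children: PriorityNode[]; }

// Exclusive insertion: the new node takes over the parent's children.
function insertExclusive(parent: PriorityNode, id: number): PriorityNode {
  const node: PriorityNode = { id, children: parent.children };
  parent.children = [node];
  return node;
}

const root: PriorityNode = { id: 0, children: [] };
insertExclusive(root, 1);
insertExclusive(root, 2);
insertExclusive(root, 3);
// root -> 3 -> 2 -> 1: the server drains response 3 fully, then 2, then 1.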

In reality, clients load a lot more than three resources and use a mix of priority signals. To understand the priority of any single request, we need to understand all requests. That presents some technological challenges, especially for servers that act as proxies, such as the Cloudflare edge network. Some servers have problems applying prioritization effectively.

Because not all clients send the most optimal priority signals, we were motivated to develop Cloudflare's Enhanced HTTP/2 Prioritization, announced last May during Speed Week. This was a joint project between the Speed team (Andrew Galloni, Pat Meenan, Kornel Lesiński), the Protocols team (Nick Jones, Shih-Chiang Chien) and others. It replaces the complicated priority tree with a simpler scheme that is well suited to web resources. Because the feature is implemented on the server side, we avoid requiring any modification of clients or the HTTP/2 protocol itself. Be sure to check out my colleague Nick's blog post that details some of the technical challenges and changes needed to let our servers deliver smarter priorities.

The Extensible Prioritization Scheme proposal

The scheme specified in draft-kazuho-httpbis-priority-04 defines a way for priorities to be expressed in absolute terms. It replaces HTTP/2's dependency-based relative prioritization: the priority of a request is independent of others, which makes it easier to reason about and easier to schedule.

Rather than send the priority signal in a frame, the scheme defines an HTTP header - tentatively named "Priority" - that can carry an urgency on a scale of 0 (highest) to 7 (lowest). For example, a client could express the priority of an important resource by sending a request with:

Priority: u=0

And a less important background resource could be requested with:

Priority: u=7
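A rough parser for the urgency parameter might look like the following sketch; note that the parameter syntax and the default urgency are still subject to change, so the fallback value here is an assumption for illustration only.

// Extracts the "u" (urgency) parameter from a Priority header value.
// The fallback default is an assumption for this sketch, not the draft's.
function parseUrgency(header: string | null, fallback = 7): number {
  if (!header) return fallback;
  for (const part of header.split(',')) {
    const m = part.trim().match(/^u=([0-7])$/);
    if (m) return Number(m[1]);
  }
  return fallback;
}

parseUrgency('u=0'); // 0: highest urgency
parseUrgency('u=7'); // 7: lowest urgency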

While Kazuho and I are the main authors of this specification, we were inspired by several ideas from the wider community, and we have incorporated feedback or direct input from many of our peers over several drafts. The text today reflects the efforts so far of cross-industry work involving many engineers and researchers from organizations such as Adobe, Akamai, Apple, Cloudflare, Fastly, Facebook, Google, Microsoft, Mozilla and UHasselt. Adoption in the HTTP Working Group means that we can help improve the design and specification by spending some IETF time and resources on broader discussion, feedback and implementation experience.

The backstory

I work in Cloudflare's Protocols team which is responsible for terminating HTTP at the edge. We deal with things like TCP, TLS, QUIC, HTTP/1.x, HTTP/2 and HTTP/3 and since joining the company I've worked with Alessandro Ghedini, Junho Choi and Lohith Bellad to make QUIC and HTTP/3 generally available last September.

Working on emerging standards is fun. It involves an eclectic mix of engineering, meetings, document review, specification writing, time zones, personalities, and organizational boundaries. So while working on the codebase of quiche, our open source implementation of QUIC and HTTP/3, I am also mulling over design details of the protocols and discussing them in cross-industry venues like the IETF.

Because of HTTP/3's lineage, it carries over a lot of features from HTTP/2 including the priority signals and tree described earlier in the post.

One of the key benefits of HTTP/3 is that it is more resilient to the effect of lossy network conditions on performance; head-of-line blocking is limited because requests and responses can progress independently. This is, however, a double-edged sword because sometimes ordering is important. In HTTP/3 there is no guarantee that requests are received in the same order that they were sent, so the priority tree can get out of sync between client and server. Imagine a client that makes two requests with priority signals stating that request 1 depends on root and request 2 depends on request 1. If request 2 arrives before request 1, the dependency cannot be resolved and becomes dangling. In such a case, what is the best thing for a server to do? Ambiguity in behaviour leads to assumptions and disappointment. We should try to avoid that.

Request 1 depends on root and request 2 depends on request 1. If an HTTP/3 server receives request 2 first, the dependency cannot be resolved.
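One possible (purely illustrative, not prescribed) server-side fallback is to re-parent an unresolved dependency onto the root rather than leave it dangling; the sketch below shows how easily servers could diverge without specified behaviour.

type PriorityTree = Map<number, { parent: number }>;

// If a stream's declared parent is not yet known, fall back to the root (0).
// This is one arbitrary choice; another server might buffer or drop instead.
function addDependency(tree: PriorityTree, streamId: number, parentId: number): void {
  const parent = parentId === 0 || tree.has(parentId) ? parentId : 0;
  tree.set(streamId, { parent });
}

const tree: PriorityTree = new Map();
addDependency(tree, 2, 1); // request 2 arrives first: parent 1 unknown, so root
addDependency(tree, 1, 0); // request 1 arrives later; 2 stays re-parented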

This is just one example where things get tricky quickly. Unfortunately the WG kept finding edge case upon edge case with the priority tree model. We tried to find solutions, but each additional fix seemed to add further complexity to the HTTP/3 design. This is a problem because it makes it hard to implement a server that handles priorities correctly.

In parallel to Cloudflare's work on implementing a better prioritization for HTTP/2, in January 2019 Pat posted his proposal for an alternative prioritization scheme for HTTP/3 in a message to the IETF HTTP WG.

Arguably HTTP/2 prioritization never lived up to its hype. However, replacing it with something else in HTTP/3 is a challenge because the QUIC WG charter required us to try to maintain parity between the protocols. Mark Nottingham, co-chair of the HTTP and QUIC WGs, responded with a good summary of the situation. To quote part of that response:

My sense is that people know that we need to do something about prioritisation, but we're not yet confident about any particular solution. Experimentation with new schemes as HTTP/2 extensions would be very helpful, as it would give us some data to work with. If you'd like to propose such an extension, this is the right place to do it.

And so started a very interesting year of cross-industry discussion on the future of HTTP prioritization.

A year of prioritization

The following is an account of my personal experiences during 2019. It's been a busy year and there may be unintentional errors or omissions; please let me know if you think that is the case. But I hope it gives you a taste of the standardization process and a look behind the scenes of how new Internet protocols that benefit everyone come to life.

January

Pat's email came at the same time that I was attending the QUIC WG Tokyo interim meeting hosted at Akamai (thanks to Mike Bishop for arrangements). So I was able to speak to a few people face-to-face on the topic. There was a bit of mailing list chatter but it tailed off after a few days.

February to April

Things remained quiet in terms of prioritization discussion. I knew the next best opportunity to get the ball rolling would be the HTTP Workshop 2019 held in April. The workshop is a multi-day event not associated with a standards-defining organization (even if many of the attendees also go to meetings such as the IETF or W3C). It is structured in a way that allows the agenda to be more fluid than a typical standards meeting and gives plenty of time for organic conversation. This sometimes helps overcome gnarly problems, such as the community finding a path forward for WebSockets over HTTP/2 thanks to a productive discussion during the 2017 workshop. HTTP prioritization is a gnarly problem, so I was inspired to pitch it as a talk idea. It was selected and you can find the full slide deck here.

During the presentation I recounted the history of HTTP prioritization. The great thing about working on open standards is that many email threads, presentations and meeting materials are publicly archived. It's fun digging through this history. Did you know that HTTP/2 is based on SPDY and inherited its weight-based prioritization scheme? The tree-based scheme we are familiar with today was only introduced in draft-ietf-httpbis-http2-11. One of the reasons for the more complicated tree was to help HTTP intermediaries (a.k.a. proxies) implement clever resource management. However, it became clear during the discussion that no intermediaries implement this, and none seem to plan to. I also explained a bit more about Pat's alternative scheme and Nick described his implementation experiences. Despite some interesting discussion around the topic, however, we didn't come to any definitive solution. There were a lot of other interesting topics to discover that week.

May

In early May, Ian Swett (Google) revived interest in Pat's mailing list thread. Unfortunately he had not been present at the HTTP Workshop, so had some catching up to do. A little while later Ian submitted a Pull Request to the HTTP/3 specification called "Strict Priorities". This incorporated Pat's proposal and attempted to fix a number of those prioritization edge cases that I mentioned earlier.

In late May, another QUIC WG interim meeting was held in London at the new Cloudflare offices, here is the view from the meeting room window. Credit to Alessandro for handling the meeting arrangements.

Thanks to @cloudflare for hosting our interop and interim meetings in London this week! pic.twitter.com/LIOA3OqEjr

— IETF QUIC WG (@quicwg) May 23, 2019

Mike, the editor of the HTTP/3 specification, presented some of the issues with prioritization, and we attempted to solve them with the conventional tree-based scheme. Ian, with contributions from Robin Marx (UHasselt), also presented an explanation of his "Strict Priorities" proposal. I recommend taking a look at Robin's priority tree visualisations which do a great job of explaining things. From that presentation I particularly liked "The prioritization spectrum"; it's a concise snapshot of the state of things at that time:

An overview of HTTP/3 prioritization issues, fixes and possible alternatives. Presented by Ian Swett at the QUIC Interim Meeting, May 2019.

June and July

Following the interim meeting, the prioritization "debate" continued electronically across GitHub and email. Some time in June Kazuho started work on a proposal that would use a scheme similar to Pat and Ian's absolute priorities. The major difference was that rather than send the priority signal in an HTTP frame, it would use a header field. This isn't a new concept; Roy Fielding proposed something similar at IETF 83.

In HTTP/2 and HTTP/3 requests are made up of frames that are sent on streams. Using a simple GET request as an example: a client sends a HEADERS frame that contains the scheme, method, path, and other request header fields. A server responds with a HEADERS frame that contains the status and response header fields, followed by DATA frame(s) that contain the payload.

To signal priority, a client could also send a PRIORITY frame. In the tree-based scheme the frame carries several fields that express dependencies and weights. Pat and Ian's proposals changed the contents of the PRIORITY frame. Kazuho's proposal encodes the priority as a header field that can be carried in the HEADERS frame as normal metadata, removing the need for the PRIORITY frame altogether.
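Side by side, the two signalling styles look like this in sketch form (frame layout simplified per RFC 7540 §6.3; the header value uses the draft's "u=" syntax shown earlier):

// Tree-based: the 5-byte body of a standalone HTTP/2 PRIORITY frame
// (E bit + 31-bit stream dependency, then weight - 1).
function priorityFrameBody(exclusive: boolean, dependency: number, weight: number): Uint8Array {
  const body = new Uint8Array(5);
  new DataView(body.buffer).setUint32(
    0, ((exclusive ? 0x80000000 : 0) | (dependency & 0x7fffffff)) >>> 0);
  body[4] = weight - 1;
  return body;
}

// Header-based: the signal travels as ordinary request metadata inside the
// HEADERS frame, so no extra frame type is needed.
const requestHeaders: Record<string, string> = {
  priority: 'u=1',
  // ...other request header fields
};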

I liked the simplification of Kazuho's approach and the new opportunities it might create for application developers. HTTP/2 and HTTP/3 implementations (in particular browsers) abstract away a lot of connection-level details such as streams or frames. That makes it hard to understand what is happening, or to tune it.

The lingua franca of the Web is HTTP requests and responses, which are formed of header fields and payload data. In browsers, APIs such as Fetch and Service Worker allow handling of these primitives. In servers, there may be ways to interact with the primitives via configuration or programming languages. As part of Enhanced HTTP/2 Prioritization, we have exposed prioritization to Cloudflare Workers to allow rich behavioural customization. If a Worker adds the "cf-priority" header to a response, Cloudflare’s edge servers use the specified priority to serve the response. This might be used to boost the priority of a resource that is important to the load time of a page. To help inform this decision making, the incoming browser priority signal is encapsulated in the request object passed to a Worker's fetch event listener (request.cf.requestPriority).
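As a hedged sketch of that Workers hook: the exact "cf-priority" value format isn't given in this post, so the value below is purely illustrative, and the FetchEvent/Request/Response types are assumed to come from the Workers runtime.

// Illustrative Worker that raises the priority of CSS responses.
addEventListener('fetch', (event: FetchEvent) => {
  event.respondWith(handle(event.request));
});

async function handle(request: Request): Promise<Response> {
  // The browser's incoming priority signal, if we wanted to inspect it.
  const browserPriority = (request as any).cf?.requestPriority;

  const upstream = await fetch(request);
  const response = new Response(upstream.body, upstream);

  if (new URL(request.url).pathname.endsWith('.css')) {
    // Ask the edge to serve stylesheets at a higher priority
    // (the value format here is an assumption for illustration).
    response.headers.set('cf-priority', '30/0');
  }
  return response;
}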

Standardising approaches to problems is part of helping to build a better Internet. Because of the resonance between Cloudflare's work and Kazuho's proposal, I asked if he would consider letting me come aboard as a co-author. He kindly accepted and on July 8th we published the first version as an Internet-Draft.

Meanwhile, Ian was helping to drive the overall prioritization discussion and proposed that we use time during IETF 105 in Montreal to speak to a wider group of people. We kicked off the week with a short presentation to the HTTP WG from Ian, and Kazuho and I presented our draft in a side-meeting that saw a healthy discussion. There was a realization that the concepts of prioritization scheme, priority signalling and server resource scheduling (enacting prioritization) were being conflated, which made effective communication and progress difficult. HTTP/2's model was seen as one aspect, and two different I-Ds were created to deprecate it in some way (draft-lassey-priority-setting, draft-peon-httpbis-h2-priority-one-less). Martin Thomson (Mozilla) also created a Pull Request that simply removed the PRIORITY frame from HTTP/3.

To round off the week, in the second HTTP session it was decided that there was sufficient interest in resolving the prioritization debate via the creation of a design team. I joined the team led by Ian Swett along with others from Adobe, Akamai, Apple, Cloudflare, Fastly, Facebook, Google, Microsoft, and UHasselt.

August to October

Martin's PR generated a lot of conversation. It was merged under the proviso that some solution be found before the HTTP/3 specification was finalized. Between May and August we went from something very complicated (e.g. orphan placeholders, PRIORITY only on the control stream, plus exclusive priorities) to a blank canvas. The pressure was now on!

The design team held several teleconference meetings across the months. Logistics are a bit difficult when you have team members distributed across West Coast America, East Coast America, Western Europe, Central Europe, and Japan. However, thanks to some late nights and early mornings we managed to all get on the call at the same time.

In October most of us travelled to Cupertino, CA to attend another QUIC interim meeting, hosted at Apple's Infinite Loop (with Eric Kinnear helping with arrangements). The first two days of the meeting were used for interop testing and were loosely structured, so the design team took the opportunity to hold its first face-to-face meeting. We made some progress and helped Ian put together some new slides to present later in the week. Again, there was some useful discussion and signs that we should set aside some agenda time at IETF 106.

November

The design team came to agreement that draft-kazuho-httpbis-priority was a good basis for a new prioritization scheme. We decided to consolidate the various I-Ds that had sprung up during IETF 105 into that document, making it a single place where people could track progress and open issues if required. This is why, even though Kazuho and I are the named authors, the document reflects broad input from the community. We published draft 03 in November, just ahead of the deadline for IETF 106 in Singapore.

Many of us travelled to Singapore ahead of the actual start of IETF 106. This wasn't to squeeze in some sightseeing (sadly) but rather to attend the IETF Hackathon. These are events where engineers and researchers can really put the concept of "running code" to the test. I really enjoy attending and I'm grateful to Charles Eckel and the team that organised it. If you'd like to read more about the event, Charles wrote up a nice blog post that, through some strange coincidence, features a picture of me, Kazuho and Robin talking at the QUIC table.

Link: https://t.co/8qP78O6cPS

— Lucas Pardue (@SimmerVigor) December 17, 2019

The design team held another face-to-face during a Hackathon lunch break and decided that we wanted to make some tweaks to the design written up in draft 03. Unfortunately the pre-meeting draft submission freeze was still in effect, so we could not issue a new draft. Instead, we presented the most recent thinking to the HTTP session on Monday, where Ian put forward draft-kazuho-httpbis-priority as the group's proposed design solution. Ian and Robin also shared results of prioritization experiments. We received some great feedback in the meeting and during the week pulled out all the stops to issue a new draft 04 before the next HTTP session on Thursday. The question now was: did the WG think this was suitable to adopt as the basis of an alternative prioritization scheme? I think we addressed a lot of the feedback in this draft and there was a general feeling of support in the room. However, in the IETF, consensus is declared via mailing lists, and so Tommy Pauly, co-chair of the HTTP WG, put out a Call for Adoption on November 21st.

December

In the Cloudflare London office, preparations begin for mince pie acquisition and assessment.

The HTTP priorities team played the waiting game and watched the mailing list discussion. On the whole people supported the concept, but there was one topic that divided opinion: some people loved the use of headers to express priorities, while others didn't and wanted to stick to frames.

On December 13th Tommy announced that the group had decided to adopt our document and assign Kazuho and me as editors. The header/frame divide was noted as something that needed to be resolved.

The next step of the journey

Just because the document has been adopted does not mean we are done. In some ways we are just getting started. Perfection is often the enemy of getting things done and so sometimes adoption occurs at the first incarnation of a "good enough" proposal.

Today HTTP/3 has no prioritization signal. Without priority information there is a small danger that servers pick a scheduling strategy that is not optimal, which could cause the web performance of HTTP/3 to be worse than that of HTTP/2. To avoid that happening we'll refine and complete the design of the Extensible Priority Scheme. There are open issues that we have to resolve, we'll need to square the circle on headers vs. frames, and we'll no doubt hit unknown unknowns. We'll need the input of the WG to make progress and its help to document the design that fits the need, and so I look forward to continued collaboration across the Internet community.

2019 was quite a ride and I'm excited to see what 2020 brings.

If working on protocols is your interest and you like what Cloudflare is doing, please visit our careers page. Our journey isn’t finished; in fact, it’s far from over.

Categories: Technology

Happy Holidays!

Fri, 27/12/2019 - 19:30
Happy Holidays!

I joined Cloudflare in July of 2019, but I've known of Cloudflare for years. I always read the blog posts and looked at the way the company was engaging with the community. I also noticed the diversity in the names of many of the blog post authors.

There are over 50 languages spoken at Cloudflare, as we have people from many countries on our team, with different backgrounds, religions, genders and cultures. And it is this diversity that makes us a great team.

A few days ago I asked one of my colleagues how he would say "Happy Holidays!" in Arabic. When I heard him say it, I instantly got the idea of recording a video in as many languages as possible of our colleagues wishing all of you, our readers and customers, a happy winter season.

It only took one internal message for people to start responding and sending their videos to me. Some recorded it themselves; others gathered in a meeting room and helped each other record their greetings. It took a few days and some video editing to put together an informal video, made entirely by the team, to wish you all the best as we close this year and decade.

So here it is: Happy Holidays from all of us at Cloudflare!

Let us know if you speak any of the languages in the video. Or maybe you can tell us how you greet each other, at this time of the year, in your native language.

Categories: Technology