Blogroll: CloudFlare

I read blogs, as well as write one. The 'blogroll' on this site reproduces some posts from some of the people I enjoy reading. There are currently 38 posts from the blog 'CloudFlare.'

Disclaimer: Reproducing an article here does not necessarily imply agreement or endorsement!

Cloudflare Blog

Unmetered Mitigation: DDoS Protection Without Limits

Mon, 25/09/2017 - 14:00

This is the week of Cloudflare's seventh birthday. It's become a tradition for us to announce a series of products each day of this week and bring major new benefits to our customers. We're beginning with one I'm especially proud of: Unmetered Mitigation.

CC BY-SA 2.0 image by Vassilis

Cloudflare runs one of the largest networks in the world. One of our key services is DDoS mitigation and we deflect a new DDoS attack aimed at our customers every three minutes. We do this with over 15 terabits per second of DDoS mitigation capacity. That's more than the publicly announced capacity of every other DDoS mitigation service we're aware of combined. And we're continuing to invest in our network to expand capacity at an accelerating rate.

Surge Pricing

Virtually every Cloudflare competitor will send you a bigger bill if you are unlucky enough to get targeted by an attack. We've seen examples of small businesses that survive massive attacks to then be crippled by the bills other DDoS mitigation vendors sent them. From the beginning of Cloudflare's history, it never felt right that you should have to pay more if you came under an attack. That feels barely a step above extortion.

With today’s announcement we are eliminating this industry standard of ‘surge pricing’ for DDoS attacks. Why should customers pay more just to defend themselves? Charging more when a customer is experiencing a painful attack feels wrong, just as surge pricing in the rain hurts ride-sharing customers when they need a ride the most.

End of the FINT

That said, from our early days, we would sometimes fail customers off our network if the size of an attack they received got large enough that it affected other customers. Internally, we referred to this as FINTing (for Fail INTernal) a customer.

The standards for when a customer would get FINTed were situation dependent. We had rough thresholds depending on what plan they were on, but the general rule was to keep a customer online unless the size of the attack impacted other customers. For customers on higher tiered plans, when our automated systems didn't handle the attacks themselves, our technical operations team could take manual steps to protect them.

Every morning I receive a list of all the customers that were FINTed the day before. Over the last four years the number of FINTs has dwindled. The reality is that our network today is at such a scale that we are able to mitigate even the largest DDoS attacks without it impacting other customers. This is almost always handled automatically. And, when manual intervention is required, our techops team has gotten skilled enough that it isn't overly taxing.

Aligning With Our Customers

So today, on the first day of our Birthday Week celebration, we make it official for all our customers: Cloudflare will no longer terminate customers, regardless of the size of the DDoS attacks they receive, regardless of the plan level they use. And, unlike the prevailing practice in the industry, we will never jack up your bill after the attack. Doing so, frankly, is perverse.

CC BY-SA 2.0 image by Dennis Jarvis

We call this Unmetered Mitigation. It stems from a basic idea: you shouldn't have to pay more to be protected from bullies who try and silence you online. Regardless of what Cloudflare plan you use — Free, Pro, Business, or Enterprise — we will never tell you to go away or that you need to pay us more because of the size of an attack.

Cloudflare's higher tier plans will continue to offer more sophisticated reporting, tools, and customer support to better tune our protections against whatever threats you face online. But volumetric DDoS mitigation is now officially unlimited and unmetered.

Setting the New Standard

Back in 2014, during Cloudflare's birthday week, we announced that we were making encryption free for all our customers. We did it because it was the right thing to do and we'd finally developed the technical systems we needed to do it at scale. At the time, people said we were crazy. I'm proud of the fact that, three years later, the rest of the industry has followed our lead and encryption by default has become the standard.

I'm hopeful the same will happen with DDoS mitigation. If the rest of the industry moves away from the practice of surge pricing and builds DDoS mitigation in by default, it could largely end DDoS attacks for good. We took a step down that path today and hope that, as with encryption, the rest of the industry will follow.

Want to know more? Read No Scrubs: The Architecture That Made Unmetered Mitigation Possible and Meet Gatebot - a bot that allows us to sleep.

Categories: Technology

No Scrubs: The Architecture That Made Unmetered Mitigation Possible

Mon, 25/09/2017 - 14:00

When building a DDoS mitigation service it’s incredibly tempting to think that the solution is scrubbing centers or scrubbing servers. I, too, thought that was a good idea in the beginning, but experience has shown that there are serious pitfalls to this approach.

A scrubbing server is a dedicated machine that receives all network traffic destined for an IP address and attempts to filter good traffic from bad. Ideally, the scrubbing server will only forward non-DDoS packets to the Internet application being attacked. A scrubbing center is a dedicated location filled with scrubbing servers.

Three Problems With Scrubbers

The three most pressing problems with scrubbing are bandwidth, cost, and knowledge.

The bandwidth problem is easy to see. As DDoS attacks have scaled beyond 1 Tbps, having that much network capacity available is problematic. Provisioning and maintaining multiple terabits per second of bandwidth for DDoS mitigation is expensive and complicated. And it needs to be located in the right place on the Internet to receive and absorb an attack. If it isn't, attack traffic must be received at one location, scrubbed, and the clean traffic forwarded to the real server: with a limited number of locations, that can introduce enormous delays.

Imagine for a moment you’ve built a small number of scrubbing centers, and each center is connected to the Internet with many Gbps of connectivity. When a DDoS attack occurs that center needs to be able to handle potentially 100s of Gbps of attack traffic at line rate. That means exotic network and server hardware. Everything from the line cards in routers, to the network adapter cards in the servers, to the servers themselves is going to be very expensive.

This (and bandwidth above) is one of the reasons DDoS mitigation has traditionally cost so much and been billed by attack size.

The final problem, knowledge, is the most easily overlooked. When you set out to build a scrubbing server you are building something that has to separate good packets from bad.

At first this seems easy (let’s filter out all TCP ACK packets for non-established connections, for example), and low-level engineers are easily excited about writing high-performance code to do that. But attackers are not stupid: they’ll throw legitimate-looking traffic at a scrubbing server, and it gets harder and harder to distinguish good from bad.
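To make the naive filter concrete, here is a toy sketch (in Python, not anything Cloudflare runs) of the rule mentioned above: drop TCP ACK packets that don't belong to a connection we've seen established. It also shows why the check alone is insufficient once an attacker completes real handshakes.

```python
# Toy connection tracker: drop bare ACKs with no known connection.
established = set()  # (src, dst) pairs that have sent a SYN

def on_packet(src, dst, flags):
    """Return True if the packet should be forwarded."""
    if "SYN" in flags:
        # Track handshakes so later ACKs can be matched.
        established.add((src, dst))
        return True
    if flags == {"ACK"} and (src, dst) not in established:
        return False  # bare ACK with no known connection: drop it
    return True

# A bare ACK from an unknown source is dropped...
assert on_packet("198.51.100.7", "203.0.113.1", {"ACK"}) is False
# ...but passes once the attacker bothers to send a SYN first, which is
# exactly the "legitimate looking traffic" problem described above.
on_packet("198.51.100.7", "203.0.113.1", {"SYN"})
assert on_packet("198.51.100.7", "203.0.113.1", {"ACK"}) is True
```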

At that point, scrubbing engineers need to become protocol experts at all levels of the stack. That means you have to build a competency in all levels of TCP/IP, DNS, HTTP, TLS, etc. And that’s hard.

CC BY-SA 2.0 image by Lisa Stevens

The bottom line is scrubbing centers and exotic hardware are great marketing. But, like citadels of medieval times, they are monumentally expensive and outdated, overwhelmed by better weapons and warfighting techniques.

And many DDoS mitigation services that use scrubbing centers operate in an offline mode. They are only enabled when a DDoS occurs. This typically means that an Internet application will succumb to the DDoS attack before its traffic is diverted to the scrubbing center.

Just imagine citizens fleeing to hide behind the walls of the citadel under fire from an approaching army.

Better, Cheaper, Smarter

There’s a subtler point about not having dedicated scrubbers: it forces us to build better software. If a scrubbing server becomes overwhelmed or fails, only the customer being scrubbed is affected; but when mitigation happens on the very servers running the core service, it has to work and be effective.

I spoke above about the ‘knowledge gap’ that comes with dedicated DDoS scrubbing. The Cloudflare approach means that if bad traffic gets through, say a flood of bad DNS packets, it reaches a service owned and operated by people who are experts in that domain. If a DNS flood gets through our DDoS protection, it hits our custom DNS server, RRDNS, and the engineers who work on it can bring their expertise to bear.

This makes an enormous difference because the result is either improved DDoS scrubbing or a change to the software (e.g. the DNS stack) that improves its performance under load. We’ve lived that story many, many times and the entire software stack has improved because of it.

The approach Cloudflare took to DDoS mitigation is rather simple: make every single server in Cloudflare participate in mitigation, load balance DDoS attacks across the data centers and servers within them and then apply smarts to the handling of packets. These are the same servers, processors and cores handling our entire service.

Eliminating scrubbing centers and hardware completely changes the cost of building a DDoS mitigation service.

We currently have around 15 Tbps of network capacity worldwide, but this capacity doesn’t require exotic network hardware. We are able to use low cost or commodity networking equipment bound together using network automation to handle normal and DDoS traffic. Just as Google originally built its service by writing software that tied together commodity servers into a super (search) computer, our architecture binds commodity servers together into one giant network device.

By building the world’s most peered network we’ve built this capacity at reasonable cost and, more importantly, are able to handle attack traffic globally, wherever it originates, over low-latency links. No scrubbing solution is able to say the same.

And because Cloudflare manages DNS for our customers and uses an Anycast network, attack traffic originating from botnets is automatically distributed across our global network. Each data center deals with a portion of DDoS traffic.

Within each data center DDoS traffic is load balanced across multiple servers running our service. Each server handles a portion of the DDoS traffic. This spreading of DDoS traffic means that a single DDoS attack will be handled by a large number of individual servers across the world.
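The dilution effect described above can be sketched with a flow hash: each packet's flow tuple deterministically picks a server, so an attack from many sources lands on many servers. This is only an illustration; real ECMP hashing happens in routers and switches, not Python, and the server names are invented.

```python
# Sketch of flow hashing: the same flow always maps to the same server,
# but many distinct attack sources spread across the whole fleet.
import hashlib

SERVERS = [f"server-{i}" for i in range(8)]  # hypothetical fleet

def pick_server(src_ip, src_port, dst_ip, dst_port):
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return SERVERS[int.from_bytes(digest[:4], "big") % len(SERVERS)]

# A botnet's packets arrive from many source addresses, so they land on
# many different servers; each handles only a slice of the attack.
hits = {pick_server(f"203.0.113.{i}", 40000 + i, "198.51.100.1", 443)
        for i in range(1000)}
assert len(hits) == len(SERVERS)
```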

And as Cloudflare grows our DDoS mitigation capacity grows automatically, and because our DDoS mitigation is built into our stack it is always on. We mitigate a new DDoS attack every three minutes with no downtime for Internet applications and have no need to ‘switch over’ to a scrubbing center.

Inside a Server

Once all this global and local load balancing has occurred packets do finally hit a network adapter card in a server. It’s here that Cloudflare’s custom DDoS mitigation stack comes into play.

Over the years we’ve learned how to automatically detect and mitigate anything the Internet can throw at us. For most attacks, we rely on dynamically managing iptables: the standard Linux firewall. We’ve spoken about the most effective techniques in the past. iptables has a number of very powerful features, which we select depending on the specific attack vector. In our experience, xt_bpf, ipset, hashlimit and connlimit are the most useful iptables modules.

For very large attacks the Linux kernel is not fast enough, though. To relieve the kernel of processing an excessive number of packets, we experimented with various kernel bypass techniques. We’ve settled on a partial kernel bypass interface: the Solarflare-specific EF_VI.

With EF_VI we can offload the processing of our firewall rules to a user space program, and we can easily process millions of packets per second on each server while keeping CPU usage low. This allows us to withstand the largest attacks without affecting our multi-tenant service.

Open Source

Cloudflare’s vision is to help build a better Internet. Fixing DDoS is a part of it. While we can’t really help with the bandwidth and cost needed to operate on the Internet, we can, and are, helping with the knowledge gap. We’ve been relentlessly documenting the most important and dangerous attacks we’ve encountered, fighting botnets and open sourcing critical pieces of our DDoS infrastructure.

We’ve open sourced various tools, from very low-level projects like our BPF Tools, which we use to fight DNS and SYN floods, to contributions to OpenResty, a performant application framework on top of NGINX which is great for building L7 defenses.

Further Reading

Cloudflare has written a great deal about DDoS mitigation in the past. Some example blog posts: How Cloudflare's Architecture Allows Us to Scale to Stop the Largest Attacks, Reflections on reflection (attacks), The Daily DDoS: Ten Days of Massive Attacks, and The Internet is Hostile: Building a More Resilient Network.

And if you want to go deeper, my colleague Marek Majkowski dives into the code we use for DDoS mitigation.


Cloudflare’s DDoS mitigation architecture and custom software make Unmetered Mitigation possible. With them we can withstand the largest DDoS attacks, and as our network grows our DDoS mitigation capability grows with it.

Categories: Technology

Meet Gatebot - a bot that allows us to sleep

Mon, 25/09/2017 - 14:00

In the past, we’ve spoken about how Cloudflare is architected to sustain the largest DDoS attacks. During traffic surges we spread the traffic across a very large number of edge servers. This architecture allows us to avoid having a single choke point because the traffic gets distributed externally across multiple datacenters and internally across multiple servers. We do that by employing Anycast and ECMP.

We don't use separate scrubbing boxes or specialized hardware - every one of our edge servers can perform advanced traffic filtering if the need arises. This allows us to scale up our DDoS capacity as we grow. Each of the new servers we add to our datacenters increases our maximum theoretical DDoS “scrubbing” power. It also scales down nicely - in smaller datacenters we don't have to overinvest in expensive dedicated hardware.

During normal operations our attitude to attacks is rather pragmatic. Since the inbound traffic is distributed across hundreds of servers we can survive periodic spikes and small attacks without doing anything. Vanilla Linux is remarkably resilient against unexpected network events. This is especially true since kernel 4.4 when the performance of SYN cookies was greatly improved.

But at some point, malicious traffic volume can become so large that we must take the load off the networking stack. We have to minimize the amount of CPU spent on dealing with attack packets. Cloudflare operates a multi-tenant service and we must always have enough processing power to serve valid traffic. We can't afford to starve our HTTP proxy (nginx) or custom DNS server (named RRDNS, written in Go) of CPU. When the attack size crosses a predefined threshold (which varies greatly depending on specific attack type), we must intervene.


During large attacks we deploy mitigations to reduce the CPU consumed by malicious traffic. We have multiple layers of defense, each tuned to a specific attack vector.

First, there is “scattering”. Since we control DNS resolution we are able to move the domains we serve between IP addresses (we call this "scattering"). This is an effective technique as long as the attacks don’t follow the updated DNS resolutions. This often happens for L3 attacks where the attacker has hardcoded the IP address of the target.

Next, there is a wide range of mitigation techniques that leverage iptables, the firewall built into the Linux kernel. But we don't use it like a conventional firewall, with a static set of rules. We continuously add, tweak and remove rules based on specific attack characteristics. Over the years we have mastered the most effective iptables extensions:

  • xt_bpf
  • ipset
  • hashlimit
  • connlimit
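As a conceptual sketch of what the hashlimit extension does, here is a per-source token bucket in Python (this is an illustration of the idea, not the kernel module's actual implementation, and the rate and burst numbers are invented):

```python
# Per-source-IP token bucket, roughly what hashlimit provides: each
# source may burst a few packets, then is held to a steady rate.
import time

class HashLimit:
    def __init__(self, rate_per_sec, burst):
        self.rate, self.burst = rate_per_sec, burst
        self.buckets = {}  # src_ip -> (tokens, last_refill_time)

    def allow(self, src_ip, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(src_ip, (self.burst, now))
        # Refill tokens for the time elapsed, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[src_ip] = (tokens - 1, now)
            return True
        self.buckets[src_ip] = (tokens, now)
        return False

limiter = HashLimit(rate_per_sec=10, burst=5)
# A source bursting 20 packets in the same instant only gets 5 through.
allowed = sum(limiter.allow("203.0.113.9", now=100.0) for _ in range(20))
assert allowed == 5
```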

To make the most of iptables, we built a system to manage the iptables configuration across our entire fleet, allowing us to rapidly deploy rules everywhere. This fits our architecture nicely: due to Anycast, an attack against a single IP will be delivered to multiple locations. Running iptables rules for that IP on all servers makes sense.

Using stock iptables gives us plenty of confidence. When possible we prefer to use off-the-shelf tools to deal with attacks.

Sometimes, though, even this is not sufficient. iptables is fast in the general case, but has its limits. During very large attacks, exceeding 1M packets per second per server, we shift the attack traffic from kernel iptables to a kernel bypass user space program (which we call floodgate), built on the Solarflare-specific EF_VI interface. With this, each server can process more than 5M attack packets per second while consuming only a single CPU core. With floodgate we have a comfortable amount of CPU left for our applications, even during the largest network events.

Finally, there are a number of tweaks we can make at the HTTP layer. For specific attacks we disable HTTP Keep-Alives, forcing attackers to re-establish TCP sessions for each request. This sacrifices a bit of performance for valid traffic as well, but is a surprisingly powerful tool for throttling many attacks. For other attack patterns we turn on the “I’m under attack” mode, forcing the attack to hit our JavaScript challenge page.

Manual attack handling

Early on, these mitigations were applied manually by our tireless SREs. Unfortunately, it turns out that humans under stress... well, make mistakes. We learned this the hard way - one of our most famous incidents happened in March 2013, when a simple typo brought our whole network down.

Humans are also not great at applying precise rules. As our systems grew and mitigations became more complex, with many specific toggles, our SREs got overwhelmed by the details. It was challenging to present all the specific information about an attack to the operator, and we often applied overly broad mitigations, which unnecessarily affected legitimate traffic. All that changed with the introduction of Gatebot.

Meet Gatebot

To aid our SREs we developed a fully automatic mitigation system. We call it Gatebot[1].

The main goal of Gatebot was to automate as much of the mitigation workflow as possible. That means: to observe the network and note the anomalies, understand the targets of attacks and their metadata (such as the type of customer involved), and perform appropriate mitigation action.

Nowadays we have multiple Gatebot instances - we call them “mitigation pipelines”. Each pipeline has three parts:

1) “attack detection” or “signal” - A dedicated system detects anomalies in network traffic. This is usually done by sampling a small fraction of the network packets hitting our network, and analyzing them using streaming algorithms. With this we have a real-time view of the current status of the network. This part of the stack is written in Golang, and even though it only examines the sampled packets, it's pretty CPU intensive. It might comfort you to know that at this very moment two big Xeon servers burn all of their combined 48 Skylake CPU cores toiling away counting packets and performing sophisticated analytics looking for attacks.

2) “reactive automation” or “business logic”. For each anomaly (attack) we determine who the target is, whether we can mitigate it, and with what parameters. Depending on the specific pipeline, the business logic may be anything from a trivial procedure to a multi-step process requiring a number of database lookups and potentially confirmation from a human operator. This code is not performance critical and is written in Python. To make it more accessible and readable by others in the company, we developed a simple functional, reactive programming engine. It helps us keep the code clean and understandable, even as we add more steps, more pipelines and more complex logic. To give you a flavor of the complexity: imagine how the system should behave if a customer upgrades a plan during an attack.

3) “mitigation”. The previous step feeds specific mitigation instructions into the centralized mitigation management systems. The mitigations are deployed across the world to our servers, applications, customer settings and, in some cases, to the network hardware.
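The three stages above can be sketched end to end. This is a deliberately simplified illustration of the detection / business logic / mitigation flow, not Gatebot's code; the function names, threshold, and sample data are all invented.

```python
# Minimal "signal -> business logic -> mitigation" pipeline sketch.

def detect(samples, threshold_pps):
    """'Signal': flag destination IPs whose sampled rate is anomalous."""
    counts = {}
    for pkt in samples:
        counts[pkt["dst"]] = counts.get(pkt["dst"], 0) + 1
    return [dst for dst, n in counts.items() if n > threshold_pps]

def business_logic(target, customers):
    """'Reactive automation': decide how (and whether) to mitigate."""
    plan = customers.get(target)
    if plan is None:
        return None  # not a customer IP: nothing to deploy
    return {"target": target, "rule": "rate-limit", "plan": plan}

def mitigate(order, deployed):
    """'Mitigation': push the rule out to the (simulated) fleet."""
    deployed.append(order)

samples = [{"dst": "198.51.100.1"}] * 500 + [{"dst": "198.51.100.2"}] * 3
customers = {"198.51.100.1": "free"}
deployed = []
for target in detect(samples, threshold_pps=100):
    order = business_logic(target, customers)
    if order:
        mitigate(order, deployed)
assert deployed == [{"target": "198.51.100.1",
                     "rule": "rate-limit", "plan": "free"}]
```

Note how the plan level flows through the business-logic stage: that is where per-customer decisions (including the "customer upgraded mid-attack" case) would live.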

Sleeping at night

Gatebot operates constantly, without breaks for lunch. For the iptables mitigation pipelines alone, Gatebot is engaged between 30 and 1,500 times a day. Here is a chart of mitigations per day over the last 6 months:

Gatebot is much faster and much more precise than even our most experienced SREs. Without Gatebot we wouldn’t be able to operate our service with the appropriate level of confidence. Furthermore, Gatebot has proved to be remarkably adaptable - we started by automating handling of Layer 3 attacks, but soon we proved that the general model works well for automating other things. Today we have more than 10 separate Gatebot instances doing everything from mitigating Layer 7 attacks to informing our Customer Support team of misbehaving customer origin servers.

Since Gatebot’s inception we have learned a great deal from the "detection / logic / mitigation" workflow. We reused this model in our Automatic Network System, which is used to relieve network congestion[2].

Gatebot allows us to protect our users no matter the plan. Whether you are on a Free, Pro, Business or Enterprise plan, Gatebot is working for you. This is why we can afford to provide the same level of DDoS protection for all our customers[3].

Does dealing with attacks sound interesting? Join our world-famous DDoS team in London, Austin, San Francisco and our elite office in Warsaw, Poland.

  1. Fun fact: all our components in this area are called “gate-something”, like: gatekeeper, gatesetter, floodgate, gatewatcher, gateman... Who said that naming things must be hard?

  2. Some of us have argued that this system should be called Netbot.

  3. Note: there are caveats. Ask your Success Engineer for specifics!

Categories: Technology

The History of Email

Sat, 23/09/2017 - 17:00

This was adapted from a post which originally appeared on the Eager blog. Eager has now become the new Cloudflare Apps.


— Text of the first email ever sent, 1971

The ARPANET (a precursor to the Internet) was created “to help maintain U.S. technological superiority and guard against unforeseen technological advances by potential adversaries,” in other words, to avert the next Sputnik. Its purpose was to allow scientists to share the products of their work and to make it more likely that the work of any one team could potentially be somewhat usable by others. One thing which was not considered particularly valuable was allowing these scientists to communicate using this network. People were already perfectly capable of communicating by phone, letter, and in-person meeting. The purpose of a computer was to do massive computation, to augment our memories and empower our minds.

Surely we didn’t need a computer, this behemoth of technology and innovation, just to talk to each other.

The computers which sent (and received) the first email.

The history of computing moves from massive data processing mainframes, to time sharing where many people share one computer, to the diverse collection of personal computing devices we have today. Messaging was first born in the time sharing era, when users wanted the ability to message other users of the same time shared computer.

Unix machines have a command called write which can be used to send messages to other currently logged-in users. For example, if I want to ask Mark out to lunch:

    $ write mark
    write: mark is logged in more than once; writing to ttys002
    Hi, wanna grab lunch?

He will see:

    Message from zack@Awesome-Mainframe.local on ttys003 at 10:36 ...
    Hi, wanna grab lunch?

This is absolutely hilarious if your coworker happens to be using a graphical tool like vim which will not take kindly to random output on the screen.

Persistent Messages

When the mail was being developed, nobody thought at the beginning it was going to be the smash hit that it was. People liked it, they thought it was nice, but nobody imagined it was going to be the explosion of excitement and interest that it became. So it was a surprise to everybody, that it was a big hit.

— Frank Heart, director of the ARPANET infrastructure team

An early alternative to Unix called Tenex took this capability one step further. Tenex included the ability to send a message to another user by writing onto the end of a file which only they could read. This is conceptually very simple; you could implement it yourself by creating a file in everyone’s home directory which only they can read:

    mkdir ~/messages
    chmod 0442 ~/messages

Anyone who wants to send a message just has to append to the file:

echo "?????\n" >> /Users/zack/messages

This is, of course, not a great system because anyone could delete your messages! I trust the Tenex implementation (called SNDMSG) was a bit more secure.


In 1971, the Tenex team had just gotten access to the ARPANET, the network of computers which was a main precursor to the Internet. The team quickly created a program called CPYNET which could be used to send files to remote computers, similar to FTP today.

One of these engineers, Ray Tomlinson, had the idea to combine the message files with CPYNET. He added a command which allowed you to append to a file. He also wired things up such that you could add an @ symbol and a remote machine name to your messages and the machine would automatically connect to that host and append to the right file. In other words, running:

SNDMSG zack@cloudflare

Would append to the /Users/zack/messages file on the host cloudflare. And email was born!
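The addressing idea Tomlinson introduced can be shown with a toy routing function: split on '@' to decide whether a message is delivered locally or shipped to a remote host. SNDMSG itself, of course, worked nothing like this Python sketch; the host names and callbacks here are invented for illustration.

```python
# Toy version of the user@host routing decision.
def route(address, local_host, deliver_local, deliver_remote):
    if "@" in address:
        user, host = address.split("@", 1)
        if host != local_host:
            # The '@' names a remote machine: connect and append there.
            return deliver_remote(user, host)
    else:
        user = address
    # No '@' (or the host is us): append to the local message file.
    return deliver_local(user)

log = []
route("zack@cloudflare", "tenex",
      deliver_local=lambda u: log.append(("local", u)),
      deliver_remote=lambda u, h: log.append(("remote", u, h)))
route("mark", "tenex",
      deliver_local=lambda u: log.append(("local", u)),
      deliver_remote=lambda u, h: log.append(("remote", u, h)))
assert log == [("remote", "zack", "cloudflare"), ("local", "mark")]
```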


The CPYNET format did not have much of a life outside of Tenex unfortunately. It was necessary to create a standard method of communication which every system could understand. Fortunately, this was also the goal of another similar protocol, FTP. FTP (the File Transfer Protocol) sought to create a single way by which different machines could transfer files over the ARPANET.

FTP originally didn’t include support for email. Around the time it was updated to use TCP (rather than the NCP protocol which ARPANET historically used) the MAIL command was added.

    $ ftp
    < open bbn
    > 220 HELLO, this is the BBN mail service
    < MAIL zack
    > 354 Type mail, ended by <CRLF>.<CRLF>
    < Sup?
    < .
    > 250 Mail stored

These commands were ultimately borrowed from FTP and formed the basis for the SMTP (Simple Mail Transfer Protocol) protocol in 1982.


The format for defining how a message should be transmitted (and often how it would be stored on disk) was first standardized in 1977:

    Date    : 27 Aug 1976 0932-PDT
    From    : Ken Davis <KDavis at Other-Host>
    Subject : Re: The Syntax in the RFC
    To      : George Jones <Group at Host>, Al Neuman at Mad-Host

    There’s no way this is ever going anywhere...

Note that at this time the ‘at’ word could be used rather than the ‘@’ symbol. Also note that this use of headers before the message predates HTTP by almost fifteen years. This format remains nearly identical today.
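Because the format has remained nearly identical, Python's standard library can parse a message shaped like that 1977 example today. This sketch uses a modernized copy of the headers (with '@' in place of the old 'at' word); the message content is from the example above.

```python
# Parse a 1977-style header-plus-body message with the stdlib parser.
from email.parser import Parser

raw = (
    "Date: 27 Aug 1976 09:32:00 -0700\n"
    "From: Ken Davis <KDavis@Other-Host>\n"
    "Subject: Re: The Syntax in the RFC\n"
    "To: George Jones <Group@Host>\n"
    "\n"
    "There's no way this is ever going anywhere...\n"
)

msg = Parser().parsestr(raw)
assert msg["Subject"] == "Re: The Syntax in the RFC"
assert msg["From"] == "Ken Davis <KDavis@Other-Host>"
assert msg.get_payload() == "There's no way this is ever going anywhere...\n"
```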

The Fifth Edition of Unix used a very similar format for storing a user’s email messages on disk. Each user had a file which contained their messages:

    From MAILER-DAEMON Fri Jul 8 12:08:34 1974
    From: Author <>
    To: Recipient <>
    Subject: Save $100 on floppy disks

    They’re never gonna go out of style!

    From MAILER-DAEMON Fri Jul 8 12:08:34 1974
    From: Author <>
    To: Recipient <>
    Subject: Seriously, buy AAPL

    You’ve never heard of it, you’ve never heard of me, but when you
    see that stock symbol appear. Buy it.

    - The Future

Each message began with the word ‘From’, meaning if a message happened to contain From at the beginning of a line it needed to be escaped lest the system think that’s the start of a new message:

    From MAILER-DAEMON Fri Jul 8 12:08:34 2011
    From: Author <>
    To: Recipient <>
    Subject: Sample message 1

    This is the body.
    >From (should be escaped).
    There are 3 lines.
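The escaping rule is simple enough to sketch in a few lines. This is a minimal illustration of the classic mbox quoting convention described above (body lines starting with "From " get a '>' prepended), not any particular mail system's implementation.

```python
# Quote body lines that would otherwise look like a message separator.
def escape_body(body):
    return "\n".join(
        ">" + line if line.startswith("From ") else line
        for line in body.split("\n")
    )

body = "This is the body.\nFrom (should be escaped).\nThere are 3 lines."
escaped = escape_body(body)
assert escaped.splitlines()[1] == ">From (should be escaped)."
```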

It was technically possible to interact with your email by simply editing your mailbox file, but it was much more common to use an email client. As you might expect there was a diversity of clients available, but a few are of historical note.

RD was an editor which was created by Lawrence Roberts who was actually the program manager for the ARPANET itself at the time. It was a set of macros on top of the Tenex text editor (TECO), which itself would later become Emacs.

RD was the first client to give us the ability to sort messages, save messages, and delete them. There was one key thing missing though: any integration between receiving a message and sending one. RD was strictly for consuming emails you had received, to reply to a message it was necessary to compose an entirely new message in SNDMSG or another tool.

That innovation came from MSG, which itself was an improvement on a client with the hilarious name BANANARD. MSG added the ability to reply to a message, in the words of Dave Crocker:

My subjective sense was that propagation of MSG resulted in an exponential explosion of email use, over roughly a 6-month period. The simplistic explanation is that people could now close the Shannon-Weaver communication loop with a single, simple command, rather than having to formulate each new message. In other words, email moved from the sending of independent messages into having a conversation.

Email wasn’t just allowing people to talk more easily, it was changing how they talk. In the words of J. C. R. Licklider and Albert Vezza in 1978:

One of the advantages of the message systems over letter mail was that, in an ARPANET message, one could write tersely and type imperfectly, even to an older person in a superior position and even to a person one did not know very well, and the recipient took no offense... Among the advantages of the network message services over the telephone were the fact that one could proceed immediately to the point without having to engage in small talk first, that the message services produced a preservable record, and that the sender and receiver did not have to be available at the same time.

The most popular client from this era was called MH and was composed of several command line utilities for doing various actions with and to your email.

    $ mh
    % show
    (Message inbox:1)
    Return-Path: joed
    Received: by (5.54/ACS) id AA08581; Mon, 09 Jan 1995 16:56:39 EST
    Message-Id: <>
    To: angelac
    Subject: Here’s the first message you asked for
    Date: Mon, 09 Jan 1995 16:56:37 -0600
    From: "Joe Doe" <joed>

    Hi, Angela! You asked me to send you a message. Here it is.
    I hope this is okay and that you can figure out how to use
    that mail system.
    Joe

You could reply to the message easily:

    % repl
    To: "Joe Doe" <joed>
    cc: angelac
    Subject: Re: Here’s the first message you asked for
    In-reply-to: Your message of "Mon, 09 Jan 1995 16:56:37 -0600." <>
    -------
    % edit vi

You could then edit your reply in vim, which is actually pretty cool.

Interestingly enough, in June of 1996 the guide “MH & xmh: Email for Users & Programmers” was actually the first book in history to be published on the Internet.

Pine, Elm & Mutt

All mail clients suck. This one just sucks less.

— Mutt Slogan

It took several years until terminals became powerful enough, and perhaps email pervasive enough, that a more graphical program was required. In 1986 Elm was introduced, which let you work with your email through a full-screen, interactive interface.

[Image: the Elm mail client]

This was followed by more graphical TUI clients like Mutt and Pine.

In the words of the University of Washington’s Pine team:

Our goal was to provide a mailer that naive users could use without fear of making mistakes. We wanted to cater to users who were less interested in learning the mechanics of using electronic mail than in doing their jobs; users who perhaps had some computer anxiety. We felt the way to do this was to have a system that didn’t do surprising things and provided immediate feedback on each operation; a mailer that had a limited set of carefully-selected functions.

These clients were becoming gradually easier and easier to use by non-technical people, and it was becoming clear how big of a deal this really was:

We in the ARPA community (and no doubt many others outside it) have come to realize that we have in our hands something very big, and possibly very important. It is now plain to all of us that message service over computer networks has enormous potential for changing the way communication is done in all sectors of our society: military, civilian government, and private.


Webmail

It’s like when I did the referer field. I got nothing but grief for my choice of spelling. I am now attempting to get the spelling corrected in the OED since my spelling is used several billion times a minute more than theirs.

— Phillip Hallam-Baker on his spelling of ’Referer’, 2000

The first webmail client was created by Phillip Hallam-Baker at CERN in 1994. Its creation was early enough in the history of the web that it led to the identification of the need for the Content-Length header in POST requests.

Hotmail was released in 1996. The name was chosen because it included the letters HTML, to emphasize it being ‘on the web’ (it was originally stylized as ‘HoTMaiL’). When it launched, users were limited to 2MB of storage (at the time, a 1.6GB hard drive cost $399).

Hotmail was originally implemented using FreeBSD, but in a decision I’m sure every engineer regretted, it was moved to Windows 2000 after the service was bought by Microsoft. In 1999, hackers revealed a security flaw in Hotmail that permitted anybody to log in to any Hotmail account using the password ‘eh’. It took until 2001 for ‘hackers’ to realize you could access other people’s messages by swapping usernames in the URL and guessing at a valid message number.


Gmail was famously created in 2004 as a ‘20% project’ of Paul Buchheit. Originally, few within Google believed in it as a product. The team had to launch using a few hundred Pentium III computers no one else wanted, and it took three years before they had the resources to accept users without an invitation. It was notable both for being much closer to a desktop application (using AJAX) and for the unprecedented offer of 1GB of mail storage.

The Future

[Image: US Postal Mail Volume, KPCB]

At this point email is a ubiquitous enough communication standard that it’s very possible postal mail as an everyday idea will die before I do. One thing which has not survived well is any attempt to replace email with a more complex messaging tool like Google Wave. With the rise of more targeted communication tools like Slack, Facebook, and Snapchat though, you never know.

There is, of course, a cost to that. The ancestors of the Internet were kind enough to give us a communication standard which is free, transparent, and standardized. It would be a shame to see the tech communication landscape move further and further into the world of locked gardens and proprietary schemas.

We’ll leave you with two quotes:

Mostly because it seemed like a neat idea. There was no directive to ‘go forth and invent e-mail’.

— Ray Tomlinson, answering a question about why he invented e-mail

Permit me to carry the doom-crying one step further. I am curious whether the increasingly easy access to computers by adolescents will have any effect, however small, on their social development. Keep in mind that the social skills necessary for interpersonal relationships are not taught; they are learned by experience. Adolescence is probably the most important time period for learning these skills. There are two directions for a cause-effect relationship. Either people lacking social skills (shy people, etc.) turn to other pastimes, or people who do not devote enough time to human interactions have difficulty learning social skills. I do not [consider] whether either or both of these alternatives actually occur. I believe I am justified in asking whether computers will compete with human interactions as a way of spending time? Will they compete more effectively than other pastimes? If so, and if we permit computers to become as ubiquitous as televisions, will computers have some effect (either positive or negative) on personal development of future generations?

— Gary Feldman, 1981

  • Use Cloudflare Apps to build tools which can be installed by millions of sites.

    Build an app →

    If you're in San Francisco, London or Austin: work with us.

  • Our next post is on the history of the URL!
    Get notified when new apps and apps-related posts are released:

Categories: Technology

A New API Binding: cloudflare-php

Sat, 23/09/2017 - 01:01


Back in May last year, one of my colleagues blogged about the introduction of our Python binding for the Cloudflare API and referenced our other bindings in Go and Node. Today we are complementing this range by introducing a new official binding, this time in PHP.

This binding is available via Packagist as cloudflare/sdk; you can install it using Composer simply by running composer require cloudflare/sdk. We have documented various use-cases in our "Cloudflare PHP API Binding" KB article to help you get started.
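
For reference, the install is a single Composer command, run from the root of an existing Composer project:

```shell
# Add the Cloudflare PHP binding to your project's dependencies
composer require cloudflare/sdk
```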

Alternatively should you wish to help contribute, or just give us a star on GitHub, feel free to browse to the cloudflare-php source code.


PHP is a controversial language, and there is no doubt there are elements of bad design within it (as is the case with many other languages). However, love it or hate it, PHP is a language of high adoption; as of September 2017, W3Techs reports that PHP is used by 82.8% of all websites whose server-side programming language is known. In creating this binding, the question clearly wasn't about the merits of PHP, but whether we wanted to help drive improvements to the developer experience for the sizeable number of developers integrating with us whilst using PHP.

To help those looking to contribute to or build upon this library, this blog post explains some of the design decisions made in putting it together.

Exclusively for PHP 7

PHP 5 initially introduced the ability to type hint on the basis of classes and interfaces; this opened up (albeit seldom used) parametric polymorphic behaviour in PHP. Type hinting on the basis of interfaces made it easier for those developing in PHP to follow the Gang of Four's famous guidance: "Program to an 'interface', not an 'implementation'."

Type hinting has developed slowly in PHP: PHP 7.0 added Scalar Type Hinting after a few rounds of RFCs, as well as Return Type Declarations, which allow return values to be type hinted in much the same way as arguments. In this library we use Scalar Type Hinting and Return Type Declarations extensively, which rules out backward compatibility with PHP 5.
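
As a minimal standalone sketch of what these features buy you (the function name here is mine, not code from the binding): under strict_types, PHP 7 enforces both the argument types and the return type at runtime rather than silently coercing.

```php
<?php
declare(strict_types=1);

// Scalar type hints on the parameters, and a return type declaration
// after the colon. In strict mode, passing a string such as "60" here
// raises a TypeError instead of being coerced to an integer.
function addSeconds(int $timestamp, int $seconds): int
{
    return $timestamp + $seconds;
}

echo addSeconds(1505779200, 60); // prints 1505779260
```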

Retaining backward compatibility would have meant forgoing these improvements to type hinting and the benefits that come with them. With Active Support no longer offered for PHP 5.6, and Security Support for the entirety of PHP 5.x little over a year away from ending, we decided the additional coverage wasn't worth the cost.


Object Composition

What do we mean by a software architecture? To me the term architecture conveys a notion of the core elements of the system, the pieces that are difficult to change. A foundation on which the rest must be built.

— Martin Fowler

When getting started with this package, you'll notice there are 3 classes you'll need to instantiate:

$key = new \Cloudflare\API\Auth\APIKey('', 'apiKey');
$adapter = new \Cloudflare\API\Adapter\Guzzle($key);
$user = new \Cloudflare\API\Endpoints\User($adapter);

echo $user->getUserID();

The first class being instantiated is called APIKey (a few other classes for authentication are available). We then instantiate the Guzzle class, injecting the APIKey object into its constructor. The Auth interface that the APIKey class implements is fairly simple:

namespace Cloudflare\API\Auth;

interface Auth
{
    public function getHeaders(): array;
}
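
Any class that can produce the right headers satisfies this contract. As a sketch (the BearerToken class and its Authorization header are hypothetical illustrations, not part of the binding), an alternative implementation might look like this:

```php
<?php
declare(strict_types=1);

namespace Cloudflare\API\Auth;

// The Auth interface from the binding, reproduced so this sketch is
// self-contained.
interface Auth
{
    public function getHeaders(): array;
}

// Hypothetical implementation: supplies an Authorization header.
// Any object implementing Auth can be injected into an Adapter.
class BearerToken implements Auth
{
    private $token;

    public function __construct(string $token)
    {
        $this->token = $token;
    }

    public function getHeaders(): array
    {
        return ['Authorization' => 'Bearer ' . $this->token];
    }
}
```

Because the Adapter only depends on the Auth interface, swapping in a class like this requires no changes anywhere else.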

The Adapter interface (which the Guzzle class implements) makes explicit that an object built on the Auth interface is expected to be injected into the constructor:

namespace Cloudflare\API\Adapter;

use Cloudflare\API\Auth\Auth;
use Psr\Http\Message\ResponseInterface;

interface Adapter
{
    ...
    public function __construct(Auth $auth, String $baseURI);
    ...
}

In doing so, we define that classes which implement the Adapter interface are to be composed using objects made from classes which implement the Auth interface.

So why am I explaining basic Dependency Injection here? It is critical to understand because, as the design of our API changes, the mechanisms for authentication may vary independently of the HTTP client, or indeed of the API endpoints themselves. Similarly, the HTTP client or the API endpoints may vary independently of the other elements involved. Indeed, this package already contains three classes for the purpose of authentication (APIKey, UserServiceKey and None) which need to be used interchangeably. This package therefore anticipates changes to the different components of the API and allows these components to vary independently.

Dependency Injection is also used where the parameters for an API endpoint become more complicated than what simple variable types permit; for example, this is done for defining the Target or Configuration when configuring a Page Rule:

require_once('vendor/autoload.php');

$key = new \Cloudflare\API\Auth\APIKey('', 'apiKey');
$adapter = new \Cloudflare\API\Adapter\Guzzle($key);
$zones = new \Cloudflare\API\Endpoints\Zones($adapter);
$zoneID = $zones->getZoneID("");

$pageRulesTarget = new \Cloudflare\API\Configurations\PageRulesTargets('*');
$pageRulesConfig = new \Cloudflare\API\Configurations\PageRulesActions();
$pageRulesConfig->setCacheLevel('bypass');

$pageRules = new \Cloudflare\API\Endpoints\PageRules($adapter);
$pageRules->createPageRule($zoneID, $pageRulesTarget, $pageRulesConfig, true, 6);

The structure of this project is based overall on simple object composition; this provides a far simpler object model for the long term and a design with greater flexibility. For example, should we later want to create an Endpoint class which is a composite of other Endpoints, it becomes fairly trivial to build one by implementing the same interface as the other Endpoint classes. As more code is added, we are able to keep the design of the software relatively thinly layered.

Testing/Mocking HTTP Requests

If you're interested in contributing to this repository, there are two key ways you can help:

  1. Building out coverage of endpoints on our API
  2. Building out test coverage of those endpoint classes

The PHP-FIG (PHP Framework Interop Group) has put together a standard for how HTTP responses can be represented behind an interface: the PSR-7 standard. Our HTTP Adapter interface utilises this; responses to API requests are type hinted to this interface (Psr\Http\Message\ResponseInterface).

By using this standard, it's easier to add further abstractions for additional HTTP clients and mock HTTP responses for unit testing. Let's assume the JSON response is stored in the $response variable and we want to test the listIPs method in the IPs Endpoint class:

public function testListIPs()
{
    $stream = GuzzleHttp\Psr7\stream_for($response);
    $response = new GuzzleHttp\Psr7\Response(200, ['Content-Type' => 'application/json'], $stream);

    $mock = $this->getMockBuilder(\Cloudflare\API\Adapter\Adapter::class)->getMock();
    $mock->method('get')->willReturn($response);
    $mock->expects($this->once())
         ->method('get')
         ->with($this->equalTo('ips'), $this->equalTo([]));

    $ips = new \Cloudflare\API\Endpoints\IPs($mock);
    $ips = $ips->listIPs();

    $this->assertObjectHasAttribute("ipv4_cidrs", $ips);
    $this->assertObjectHasAttribute("ipv6_cidrs", $ips);
}

We are able to build a simple mock of our Adapter interface using the standardised PSR-7 response format; when we do so, we can define which parameters PHPUnit expects to be passed to this mock. With a mock Adapter class in place, we are able to test the IPs Endpoint class as if it were using a real HTTP client.


Through building on modern versions of PHP, using good Object-Oriented Programming theory, and allowing for effective testing, we hope our PHP API binding provides a developer experience that is pleasant to build upon.

If you're interested in helping improve the design of this codebase, I'd encourage you to take a look at the cloudflare-php source code on GitHub (and optionally give us a star).

If you work with Go or PHP and you're interested in helping Cloudflare turn our high-traffic customer-facing API into an ever more modern service-oriented environment; we're hiring for Web Engineers in San Francisco, Austin and London.

Categories: Technology

Project Jengo Strikes Its First Targets (and Looks for More)

Thu, 21/09/2017 - 17:02

Jango Fett by Brickset (Flickr)

When Blackbird Tech, a notorious patent troll, sued us earlier this year for patent infringement, we quickly discovered that the folks at Blackbird were engaged in what appeared to be the broad and unsubstantiated assertion of patents -- filing about 115 lawsuits in less than three years without yet winning a single one of those cases on the merits in court. Cloudflare felt an appropriate response would be to review all of Blackbird Tech’s patents, not just the one it asserted against Cloudflare, to determine if they are invalid or should be limited in scope. We enlisted your help in this endeavor by placing a $50,000 bounty on prior art that proves the Blackbird Tech patents are invalid or overbroad, an effort we dubbed Project Jengo.

Since its inception, Project Jengo has doubled in size and provided us with a good number of high-quality prior art submissions. We have received more than 230 submissions so far, and have only just begun to scratch the surface. We have already come across a number of standouts that appear to be strong contenders for invalidating many of the Blackbird Tech patents. This means it is time for us to launch the first formal challenge against a Blackbird patent (besides our own), AND distribute the first round of the bounty to 15 recipients totaling $7,500.

We’re just warming up. We provide information below on how you can identify the next set of patents to challenge, help us find prior art to invalidate those targets, and collect a bit of the bounty for yourselves.

I. Announcing Project Jengo’s First Challenges (and Awards!)

We wrote previously about the avenues available to challenge patents short of the remarkable cost and delay of federal court litigation; the exact cost and delay that some Blackbird targets are looking to avoid through settlement. Specifically, we explained the process of challenging patents through inter partes review (“IPR”) and ex parte reexamination (“EPR”).

Based on the stellar Prior Art submissions, we have identified the first challenge against a Blackbird patent.

U.S. Patent 7,797,448 (“GPS-internet Linkage”)

The patent, which has a priority date of October 28, 1999, describes in broad and generic terms “[a]n integrated system comprising the Global Positioning System and the Internet wherein the integrated system can identify the precise geographic location of both sender and receiver communicating computer terminals.” It is not hard to imagine that such a broadly-worded patent could potentially be applied against a massive range of tech products that involve any GPS functionality. The alarmingly simplistic description of the patented innovation is confirmed by the only image submitted in support of the patent application, which shows only two desktop computers, a hovering satellite, and a triangle of dotted lines connecting the three items.

Blackbird filed suit in July 2016 against six companies asserting this ‘448 patent. All of those cases were voluntarily dismissed by Blackbird within three months -- fitting a pattern where Blackbird was only looking for small settlements from defendants who sought to avoid the costs and delays of litigation. A successful challenge that invalidates or limits the scope of this patent could put an end to such practices.

Project Jengo’s Discovery - The patent claims priority to a provisional application filed October 28, 1999, but Project Jengo participants sourced four different submissions that raise serious questions about the novelty of the ‘448 patent when it was filed:

  • Research literature from April 1999 describing a system utilizing GPS cards for addressing terminals connected to the internet. “GPS-Based Geographic Addressing, Routing, and Resource Discovery,” Tomasz Imielinski and Julio C. Navas, Vol 42, No. 4 COMMUNICATIONS OF THE ACM (pgs. 86-92).

  • A request for comment from the Internet Engineering Task Force on a draft research paper from November 1996 on “integrating GPS-based geographic information into the Internet Protocol.” IETF RFC 2009

  • One submission included seven patents that all pre-date the priority date of the ‘448 patent (as early as July 1997) and address similar--yet more specific--efforts to use GPS location systems with computer systems.

  • And on a less-specific but still relevant basis, one submitter points to the APRS system that has been used by Ham Radio enthusiasts and has tagged communications with GPS location for decades.

Project Jengo participants who provided these submissions will each be given an award of $500!

What we plan to do -- Because this patent is written (and illustrated) in such broad terms, Blackbird has shown a willingness to sue under this patent, and Project Jengo has uncovered significant prior art, we think this case provides a promising basis to challenge the ‘448 patent. We are preparing an ex parte reexamination of the ‘448 patent, which we expect to file with the US Patent and Trademark Office in October. Again, you can read about an ex parte challenge here. We expect that after review, the USPTO will invalidate the patent. Although future challenges may be funded through crowdsourcing or other efforts, we will be able to fund this challenge fully through funds already set aside for Project Jengo, even though this patent doesn’t implicate Cloudflare’s services.

US Patent 6,453,335 (the one asserted against Cloudflare)

Project Jengo participants have also done an incredible job identifying relevant prior art on the patent asserted against Cloudflare by Blackbird Tech. Blackbird claims that the patent describes a system for monitoring an existing data channel and inserting error pages when transmission rates fall below a certain level. We received a great number of submissions on that patent and are continuing our analysis.

Cloudflare recently filed a brief with the U.S. District Court in which we pointed to eleven pieces of prior art submitted by Jengo participants that we expect will support invalidity in the litigation.

Bounty hunters who first submitted this prior art that was already used in the case will each receive $500. The Project Jengo Team at Cloudflare is continuing analysis of all the prior art submissions, and we still need your help! The litigation is ongoing and we will continue to provide a bounty to prior art submissions that are used to invalidate the Blackbird patents.

The Search Goes On… with new armor

These challenges to Blackbird patents are only the start. Later in this blog post, we provide an extensive report on the status of the search for prior art on all the Blackbird patents, and include a number of new patents we’ve uncovered. Keep looking for prior art on the Blackbird patents, we still have plenty of bounties to award and a number of patents ripe for a challenge. You can send us your prior art submissions here.

Even if you didn’t receive a cash award (yet), our t-shirts are about to hit the streets! Everyone who submitted prior art to Project Jengo will be receiving a t-shirt. If you previously made a submission, we’ve emailed you instructions for ordering your shirt. This offer will remain open for the duration of Project Jengo for anyone that submits new prior art on any of the Blackbird patents. Enjoy your new armor!

II. Elsewhere in Project Jengo...

Ethics complaint update

We know Blackbird’s “new model” is dangerous to innovation and merits scrutiny, so we previously lodged ethics complaints against Blackbird Tech with the bar disciplinary committees in Massachusetts and Illinois. This week, we sent an additional letter to the USPTO’s Office of Enrollment and Discipline asking them to look into possible violations of the USPTO Rules of Professional Conduct. As with the other jurisdictions, the USPTO Rules of Professional Conduct prohibit attorneys from acquiring a proprietary interest in a lawsuit (Rule 11.108(i)) and from sharing fees or equity with non-lawyers (Rules 11.504(a) and 11.504(d)). Blackbird’s “new model” seems to violate these ethical standards.

Getting the word out

Cloudflare’s Project Jengo continues to drive conversation about the corrosive problem of patent trolls. Since our last blog update, our efforts have continued to draw attention in the press. For the latest, you can see...

“The hunted becomes the hunter: How Cloudflare’s fight with a ‘patent troll’ could alter the game,” -- TechCrunch

“Cloudflare gets another $50,000, to fight ‘new breed of patent troll,’” -- Ars Technica

“This 32-year-old state senator is trying to get patent trolls out of Massachusetts,” -- TechCrunch

III. A Progress Report on Challenges to the Blackbird Patents

As you continue your search for prior art as part of Project Jengo, we’ve updated our chart of Blackbird patents, and identified a number of new patents and applications we’ve found that Blackbird has acquired.

As reflected on the chart (in red), so far 5 of the patents are being challenged or have been invalidated. In addition to our pending challenge of the ‘448 patent:

  • In June 2016, Blackbird Tech sued software maker kCura LLC and nine of its resellers for allegedly infringing U.S. Patent 7,809,717, which was described as a Method and Apparatus for Concept-based Visual Presentation of Search Results. kCura makes specialized software used by law firms during document review. The judge in kCura’s case invalidated every claim in the ‘717 patent because the “abstract idea” of using a computer instead of a lawyer to perform document review cannot be patented.

  • US Patent 6,434,212 -- This patent seeks protection for “a pedometer having improved accuracy by calculating actual stride lengths.” Numerous challenges to this patent have been filed with the Patent Trial and Appeal Board (PTAB), which adjudicates some IPR challenges. There are currently challenges against this “Pedometer” patent that have been filed by Garmin, TomTom and Fitbit.

  • US Patent 7,129,931 -- This patent for a “multipurpose computer display system” is undergoing IPR challenge brought by Lenovo, Inc.

  • US Patent 7,174,362 -- This patent for a “method and system for supplying products from pre-stored digital data in response to demands transmitted via computer network” was challenged by Unified Patents, Inc.

In the charts below, we’ve highlighted 11 Blackbird patents (in green) that seem ripe for challenge -- based on a combination of the fact that they seem broadly applicable to important industries, may have already been the basis of a Blackbird lawsuit, and/or already have some valuable prior art sourced through Project Jengo. We’ll take submissions on any Blackbird patent, but these are the patents we’re focused on and should get extra attention from Project Jengo participants seeking a bounty.

After our review is a bit further down the road, we’ll make all the prior art we’ve received on these patents available to the public so that anyone facing a challenge from Blackbird can defend themselves. We hope to have that information posted by the end of October.

And finally, Cloudflare is funding the first ex parte challenge fully out of funds it has set aside or had donated to Project Jengo. Should any of these patents hit home for you, and you are interested in supporting this fight financially, please reach out to

-Happy Hunting!

Categories: Technology

#FuerzaMexico: A way to help Mexico Earthquake victims

Wed, 20/09/2017 - 14:17

Photo Credit: United Nations Photo (Flickr)

On September 19, 1985, Mexico City was hit with the most damaging earthquake in its history. Yesterday, exactly 32 years later, Mexico’s capital and neighbouring areas were hit again by a large earthquake that caused significant damage. While the scale of the destruction is still being assessed, countless people passed away and the lives of many have been disrupted. Today, many heroes are on the streets focusing on recovery and relief.

We at Cloudflare want to make it easy for people to help out those affected in central Mexico. The Mexico Earthquake app will allow visitors to your site to donate to one of the charities helping those impacted.


The Mexico Earthquake App takes two clicks to install and requires no code change. The charities listed are two well respected organizations that are on the ground helping people now.

Install Now

If you wanted to add your own custom list of charities for disaster relief or other causes, feel free to fork the source of this app and make your own.

#FuerzaMéxico: Una manera de apoyar a los damnificados del SismoMX

El 19 de septiembre de 1985 la Ciudad de México fue afectada por uno de los peores sismos en su historia. Ayer - exactamente 32 años después - la CDMX y áreas circunvecinas fueron afectadas por otro fuerte sismo. Aunque la escala de la destrucción todavía no se conoce a fondo, muchísimas personas han sufrido daños. Miles de héroes mexicanos se enfocan en búsqueda, rescate y reconstrucción.

En Cloudflare queremos poner nuestro granito de arena y asegurarnos que los donativos para los afectados puedan llegar de forma fácil. Nuestra app Mexico Earthquake permitirá a aquellos que visitan tu sitio web que donen a asociaciones civiles que apoyan a los damnificados.

Install Now

Si quieres agregar otras organizaciones y/o caridades, puedes modificar el código fuente disponible aquí.

Categories: Technology

Cloudflare and Google Offer App Developers $100,000 in Cloud Platform Credits

Tue, 19/09/2017 - 14:04

Cloudflare and Google Cloud Platform logos

When Cloudflare started, our company needed two things: an initial group of users, and the finances to fund our development. We know most developers face the same issues. The Cloudflare Apps Platform solves the first problem by allowing third parties to develop applications that can be delivered across Cloudflare's edge network to any of the six million sites powered by Cloudflare. The Cloudflare Developer Fund alleviates the second by giving developers the financial support they need to fund their company. Today, we are excited to announce another initiative that will make it possible for developers to make their app dreams a reality.

Cloudflare and Google Cloud are working together to offer developers the resources needed to quickly launch and scale Cloudflare Apps. This partnership will give any Cloudflare Apps developer the chance to access a wide range of benefits, including $3,000 to $100,000 in Google Cloud Platform (GCP) credits for one year at no cost. Some startups will also be eligible for 24/7 technical support and access to GCP's technical solutions team. This supports a core belief of the Cloudflare Apps initiative: we want developers to focus on building great Apps, not worry about paying for infrastructure. Hundreds of startups have already built successful applications on Cloudflare Apps, and those applications have grown to serve hundreds of thousands of users. This program with Google Cloud significantly decreases the friction of getting up and running on Cloudflare Apps, allowing the next generation of developers and startups to make their living by building Apps.

How does it work?

$100k for Exceptional Apps: After an approval process, your App could be awarded $20k in Cloud Credits, extendable to $100k based on usage in the first year.

Up to $3,000 for early stage startups: If you are an early-stage startup you are entitled to a $3,000 Google Cloud credit. Even if you aren't quite a startup yet, you are entitled to $500 if you are a first-time Google Cloud Platform user, and $200 if you are an existing user.

Collect your credits now!

Categories: Technology

Truth Lives in the Open: Lessons from Wikipedia

Fri, 15/09/2017 - 01:33

Victoria Coleman, CTO, Wikimedia Foundation

Moderator: Michelle Zatlyn, Co-Founder & COO, Cloudflare

Photo by Cloudflare Staff

MZ: What is the Wikimedia Foundation?

VC: We pride ourselves on aiming to make information broadly available.

We’re the 5th most visited site on the planet.
We are the guardians of the projects. There are 12 projects that we support; Wikipedia is the most prominent, but there are others that will be just as influential in the next 5 years, e.g. Wikidata.
We support 299 languages.

Let’s also talk about the things that we don’t do: we don’t do editing. We edit as community members but not as members of the foundation.

We don’t monetize our users, content, or presence. We are completely funded by donations, with an average donation of $15.

MZ: If your mission is to help bring free education to all, getting to everyone can be hard. So how do you get access to people in hard-to-reach areas?

VC: It's definitely a challenge. We built this movement primarily in North America and Europe, but our vision goes beyond that. We started doing some focused research in Brazil, Mexico, and Nigeria.

We're trying to understand what global communities in other parts of the world need.

We found that some people don’t know who we are, so we need to communicate to these people who we are.

MZ: We just heard on the last panel, and the notion of fake news came up. What is the foundation’s point of view around fake news? How can you give us hope for the future?

VC: First of all, the Foundation does not deal in news. One of our core principles is that we deal in existing knowledge. What we do is make it as reliable as we can possibly make it. We have a community of 200,000 editors.

In our community, we live by principles: reliability of the source (“citation needed”). We maintain these principles and ask our community of more than 200,000 people to make sure they are upheld. We are vocal and we hold each other accountable. “Democracy dies in darkness”; truth thrives in openness. We create quality content through openness.

MZ: When something controversial is posted on Wikipedia, how quickly does it get pulled?

VC: It depends on how front-of-mind the topic is. Sometimes in seconds.

Content that is incorrect very rarely persists past a week or month.

Medicine and Military history are the two most popular Wiki topics.

An ER doctor is one of our most prolific editors; he said that if he can edit Wikipedia, he can reach 45 million people a month.

MZ: One of the reasons I went into tech rather than the medical field was because it was another way to help people at scale. Everything on Wiki has to have a source, a citation. But that must be hard. What are the implications for this?

VC: We take that very seriously. This past June, we were able to liberate 45% of all citations from the platform. Suddenly 60 million citations became available for everybody to use. This is very important material for research.

Being able to share the citations e.g. about Zika virus is what allowed this community to accelerate finding solutions. We advocate vociferously for openness, content that is not behind the wall.

A while ago the community decided not to allow citations or references to the Daily Mail in the encyclopedia, because they felt that as a source of news it was less reliable.

MZ: Has that since been reversed?

VC: I don’t believe so.

MZ: You mentioned that Foundation builds other tools; what are some of the other open-source tools you are building that our audience might find useful?

VC: For example, MediaWiki is being used by the Department of Energy and the intelligence community; the intelligence community has a product called Intellipedia that gets 350,000 hits per day. It's another way of making tools through which people share knowledge.

Another example is ToolForge: taking data sets and making them available to volunteers who write tools.

So you come to us and we will give you what you need; not just computing and storage but data sets to work with. And people make magic...

MZ: The Foundation is a study in people coming together around the world, an example of optimism. Wikipedia is one of the top 5 sites; how do you keep that position? What's next for the Foundation?

VC: We want to continue to scale. It's a matter of a lot of introspection. This will tell you about how you work. We're at the tail end of an 18-month consultation project with thousands of volunteers in our community all over the world. I came from a corporate background, and you know how strategy is made there: you go into the boardroom and come out and say this is how it is going to be. This is not how it works for us: it's not our movement, it's the movement of our volunteers.

We are going to continue focusing on making knowledge available to everybody. Our volunteers told us they want us to go beyond the confines of North America and Europe.

Now the challenge is to figure out how to get there.


Q: Silicon Valley has a gender issue; what about Wikipedia? Who is the Wiki community? Who is invited to participate, what articles are challenged or not? How does the leadership of the community meaningfully address these issues going forward?

VC: You bring up a very good point. I must say that we are fairly balanced within the Foundation itself. But I sympathize and agree. People that edit can use whatever identity they want, so we don’t actually know what gender identity our editors are.

E.g. one of our researchers noted differences in men's and women's bios: women's had more info about their spouses.

The first step is recognizing the problem. From a tech perspective, we are building tools to help reduce bias where possible. But the real solution is not to have bias in the first place. We are doing a lot of work with community engagement to make the experience of becoming an editor more welcoming for women; our community engagement department works with people to help them make their first edits.

Q: Things in Wikipedia are footnoted, often with links from the web, which are brittle and changeable. Can there be a partnership between Wikipedia and the Internet Archive to preserve links?

VC: Yes. We look to build partnerships with everyone.

All our sessions will be streamed live! If you can't make it to Summit, here's the link:

Categories: Technology

Will Data Destroy Democracy?

Fri, 15/09/2017 - 01:03

Lawrence Lessig, Roy L. Furman Professor of Law and Leadership, Harvard Law School and Darren Bolding, CTO, Cambridge Analytica

Moderator: Matthew Prince, Co-Founder & CEO, Cloudflare

Photo by Cloudflare Staff

MP: If there’s one person responsible for the Trump presidency, it seems there is a compelling argument that that might be you.

DB: I very much disagree with that.

MP: How does Cambridge Analytica work, and how did the Trump campaign use it to win the presidency?

DB: We take that data and match it up with lists of voters, and combine that with data science to come up with ideas about who you might want to sell a product to, or, in the case of politics, this person's propensity to vote and the candidate they are likely most interested in. We also do all the digital advertising. By combining data with digital advertising, we have lots of power.

MP: So you don't want to take credit for having won the election, but the campaign's use of data and targeting was an important factor.

DB: Yes, and what Cambridge did was basically a great turnaround story.

MP: Larry, you ran a presidential campaign focused on one issue: campaign finance reform. Yet the candidate who spent half as much as Hillary Clinton won. Is finance still the issue, or do we need to start thinking about data as the divider?

LL: My slogan was not “fix campaign finance” but “fix democracy first.” That means fixing all the different ways the system denies us a democracy in the sense that we are equal citizens. If you have a Congress spending 30-70% of its time raising money, or gerrymandering, that is not a Congress concerned with representing its citizens. This is not a system that produces a citizenry invested in electing a president.

Our Electoral College means that the vote of Republicans here in California is worth nothing. These are all the ways in which we have a failed democracy.

I wanted to at least have a voice in the debate to rally around these issues.

What happened is the Democratic Party changed the rules just as I qualified to be on that stage. But I would suggest that the man who won took the same set of slogans, like “Drain the Swamp,” ran as full-force as he could, and targeted as his opponent a woman who was precisely “sold out” to these interests.

I think it is the fundamental issue.

MP: One of the core tenets of democracy seems to be a shared understanding.

If you have 15 different targeted messages, does that corrode the shared understanding?

LL: The truth is, in the half of DB's world focused on commerce, it's the best of all possible times. In the half of the architecture of communication focused on giving people access to Netflix, it's the best of all possible times. We have to recognize that the internet is the best and worst of all possible times at the same time.

So when you shift to democracy, the same technologies undermine our ability to do democracy the way we did before. It used to be that the process of winning an election was the same as building a coalition. It was out in front, in plain sight, and when you won, you knew why.

When you have technology like the kind Cambridge Analytica has perfected, the process of winning an election is totally separate from governing.

MP: So Darren are you destroying democracy?

DB: The act of democracy is allowing people to choose who their representatives are. That doesn't imply that everyone has to have the same shared context. I think it's possibly beneficial that people with disparate points of view and interests have those interests addressed.

MP: But you work for a company that says it has unique tech to do this better. What is it about the tech that makes it so much better, yet doesn't corrode shared understanding on the other side?

DB: The shared understanding out there is almost more cultural than anything. I think that having a conversation with you about the regulations that Germany might impose doesn't prevent you from knowing about other aspects of foreign policy with Germany; it's just a specific thing you care about. Now, if the messages are contradictory, that's when it becomes a problem. But as long as people are maintaining consistent points of view, it's not wrong to communicate about issues that are important to a specific set of people.

LL: I wouldn't say that CA produced the diffuse culture where there is no shared understanding. But what we don't recognize enough is how extraordinary the 1960s and '70s were for democracy, when everybody was focused on three television shows every night. America basically understood the same stuff.

MP: The former chair of the FCC says that maybe today's situation is actually the natural state; in the '60s and '70s, three companies controlled a profitable technology and could spend more time being neutral and elevating the conversation. Is that period what we should be striving for, or was it a reaction to fear of regulation?

LL: I agree it was an extraordinary period. It defined how we understand democracy, and that period is gone. I don't want to return to it. Those three shows were too narrow in a number of ways. My point is that we don't yet have a good model for how to work a democracy where we all live in our own niche worlds of basic facts.

The architecture of media today is just like the architecture of media in the 19th century. Most journalism was partisan, all about rallying the troops to your own version of the truth. The difference is that we had no way of knowing what the public thought then. We could only know what the politicians thought. We didn't even have polling.

MP: But back then, you also had a particular understanding of what you were reading; today, FB has an algorithm, there is an editorial voice, and you don’t know what that is. There is some neutrality.

LL: Back then, media drove people to vote in a certain way or not. But today, the views of people about whether we should go to war in Iraq or whether immigrants deserve to be blocked, the views of the people matter directly.

We are supposed to have a representative democracy, but we increasingly have a direct democracy composed of a public that doesn't know anything about the issues, because we live in niche-market bubble worlds that don't inform us the way our broader world did in the past.

DB: Data science is part of the solution. I can use a tool on FB to tell me what percentage of my wall is Democrat or Republican.

MP: So that’s the argument that we are only just getting used to tech. We will get better at being able to interpret these things and see through them.

DB: These tools also make it easier for smaller groups to get their points of view out into the general market. It costs less to get their message out there. You couldn't do that before, because all the power was in a small number of hands. So data science available to anybody through FB is actually quite powerful.

I for one think that if you are accurately representing what the populace is interested in, that is not a bad thing for democracy; that's a good thing.

If the public is fractured, that’s what the public deserves.

LL: As a kid, as a Republican, I was celebrating the internet; I was saying exactly what DB is saying, but we didn't think enough about the ways it would change the context in which we could have the conversation.

We have never had the ability of someone to speak to 30 million people without an editor standing between. This is new. But now a guy can tweet, and it is seen by 30 million people, and we don’t yet know how to run a democracy with that dynamic.

I hitchhiked across the Soviet Union when I was young, and I was told that in the Soviet Union they have a better system of free speech than we do in America: “We wake up and realize that every newspaper is lying to us, so we have to read 7-8 different newspapers before we understand the truth. This develops a better culture of critical understanding than you have in the US.”

We have become the Soviet Union; our parents don't yet know how to deal with a world in which everyone is lying to them.

But our kids know, and can figure it out based on 7 or 8 different feeds.

MP: So is the solution time? If Cambridge Analytica won this election, what is the next trick? Who will win the next one?

DB: I think the personalization of information will allow individuals to better communicate with people they know. Rather than having one person broadcasting, you'll have personal relationships. The dispersion of central control over the message out to individuals is very powerful. Now, instead of Donald Trump talking at you, you have someone else...

MP: It's a way to trick the kids, then, isn't it? If your friends are telling you something, that's how you get to the cynics.

LL: It's certainly a wonderful development; but the problem is if they're doing that on the basis of a totally different understanding of the world. Some people think climate change is real; others think it's false. If there's no common ground of understanding, that may be good for winning elections, but not for actually governing.

DB: you’re building a virtual community in each “town,” and each community is discussing what is important to them.

MP: I was just talking to an engineer in China, who said that democracy is great but it always drops to its lowest common denominator. How do we fix that, if that's the case?

DB: Our original founders wrestled with that idea; we have to keep trying.


Q: Does Cambridge Analytica make problems like Willie Horton worse or better?

DB: I don’t think it plays that much of a role one way or another. Your context is the ads that played during the Bush campaign?

I think it just makes the message more amplified.

LL: Here we have a real disagreement. You have an assumption that people can't be inconsistent in how they represent their world view. If we have a technology that perfects the ability to elect people, but not through public conversation, that encourages this dramatic...

DB: As long as the campaign is consistent and does not change its point of view…

LL: When have we ever seen that?

Q: Where do you draw the line on ethical microtargeting? Are you creating models to target people on the basis of racial messaging?

DB: I don’t think Cambridge pushed any racially charged messages--

MP: Do you identify people… do you have a category for racists?

DB: we had 15 models. It never even came up.

MP: How do we set a framework or a social contract so that a Cambridge Analytica doesn't have a racist profile?

LL: The story that broke today about ProPublica and FB: Facebook basically had an anti-Semitic ad category to market to people who hate Jews, and algorithms inside FB had been used to target anti-Semites.

Mark Zuckerberg is interested in finding what people want and catering to it; and that’s fine. In 99% of what we care about, that’s what we want. But in democracy, that’s a terrifying possibility.

Q: People make decisions based on knowledge & information they consume. We are now talking about driving mass behavior, which is different from just giving people what they want.

How can data science be used responsibly? What regulations do we need when social networks are driving mass behavior? If it’s not regulation, what other structures do we need?

DB: If you look at the EU, they have the GDPR, and there’s a control over how much information is available. People being aware of how much information they have to give up is going to be somewhat helpful. If you know what information you are giving up, you know what you are able to be targeted on. There will also need to be some sort of code of ethics about what is right and what is wrong to do with data. I am inherently not a fan of regulation. When you have that, entrenched players will create regulatory capture which will stifle innovation.

There should be some sort of element there. “Algorithms will find the worst in us if you let them go nuts.” And this is not all happening on one side of the spectrum.

LL: It’s fun and hopeful to talk about codes of ethics stifling the worst, but if the worst is profitable, the code of ethics will be eaten by the profit.

In one of Steve Bannon's last interviews, he said, “What we want is for the Democrats to talk about identity politics every single day until the next election, and we're going to talk about economic policy and we will kill them.” And you begin to realize that racism is just them playing the Democrats. In our world of 2-second attention spans, what do you do to resist that?

All our sessions will be streamed live! If you can't make it to Summit, here's the link:

Categories: Technology

As Seen on TV

Fri, 15/09/2017 - 00:37

Chris Cantwell, Co-Creator and Show Runner, Halt & Catch Fire

Moderator: John Graham-Cumming, CTO, Cloudflare

Photo by Cloudflare Staff

CC: First off, we have very low ratings! The story came from my father, who worked in computers in the early '80s in Dallas, and later in California. The dynamic between those characters was influenced by my dad.

This was largely a story about reverse engineering. The underdog story was interesting: not Bill Gates, not Silicon Valley, but a different story about the computer world.

JGC: And you managed to do 4 seasons.

CC: In four seasons we go from '83 to '94; we cover everything from small networks to the building of the internet backbone, the rise of search, and the World Wide Web.

JGC: I watched it before I came; it gave me some bad memories because there were AOL disks

CC: We have an incredible prop team. Some of it comes from the RI Computer Museum; I'd have to ask our prop master, but he might have manufactured them from images online.

JGC: This is a show about tech but also about money; these people are trying to build companies, the same people trying again and again. Is that a metaphor for recycling something?

CC: Yes, I think so; a big theme is reinvention, on a personal level and in terms of what they're working on.
Reinvention as a theme championed by Silicon Valley is a really universal concept.

We learned from our research and our tech advisor that there are ideas that float into the ether, diffuse and shared, and at some point one person catches something and the idea takes off. That idea “wins,” but really it's a chaotic mess of people playing with possibilities.

JGC: What strikes me is how the characters are trying to build something and they don't know what they're doing. Then one of the characters talks about building an index for the web. In some ways that's the nature of creation; you don't know what direction you're going. There's a link-up with art there.

CC: Absolutely. In season 4 a character has been in the basement since 1990; we realized that it took a while for the web to take off. We portrayed a guy in a basement collecting post-its, handwriting URLs. It went from a new website every few days to a new website every second. So he has collected them, and we have a visual representation of his whiteboard. He gives them to his friends, who give them to someone else; she builds her own website that links to each site.

Organically people discover that site, and then they have a proto-viral site on their hands. It didn't start that way. It was like the Yellow Pages, but those don't really exist anymore.

JGC: It also struck me that what they're doing in the '83 clip is really quite technical. It struck me that tech has gotten much more complicated but also much simpler.

CC: Yes; with the rise of the computing industry we've also experienced the accessibility of tech. You can go to Best Buy right now and buy a cinematic camera that you once had to rent, after going to film school for years, and you wouldn't know what it looked like until the film was developed.

Accessibility is a great power and virtue of the industry. We tracked this over the course of the series.

In the first season they feel like young upstarts. By season 4 they are struggling to keep up with what's going on.

It’s amazing to see this happening even today, given the democratization of so many things.

JGC: The characters are always optimistic---this isn’t Black Mirror or Westworld. What happened to that optimism?

CC: On a character level, we follow people who are always focused on the next thing.

We’re placing our happiness on what’s to come, and there is a kind of grasping that the characters are constantly engaged in, born of a real belief in what they are doing. And yet they are never satisfied with where they are. Over the course of the journey of the series, these five characters realize that about their lives and wonder whether they can actually step off the wheel.

On the tech side, when people started pulling apart and indexing web pages, it was done for fun. The first experiences on the internet were just about exploring, and there was a joy in that.

What is unspoken on the show is a tremendous ambivalence that couldn’t happen now.

“We might be on a train that we are no longer piloting.”

Tech is moving so fast that we can’t adapt as quickly as the things we are building.

JGC: Is it also that we can’t imagine the consequences of what we are doing?

CC: I think so. There isn't much foresight; the characters on our show don't have the benefit of hindsight.

The characters in the show talk a lot about the future. Future is a heavy word. People sometimes say: “There’s no such thing as the future; it’s just people trying to sell you a crappy version of the present”. We can never predict it.

JGC: If you look at '83, they have a physical machine, and by '94, it's all software. So a lot of what you're trying to portray is really quite boring; how do you dramatize sitting in front of a computer?

CC: Again, low ratings! It's interesting; ever since the pilot, I love it when characters have something to hold.

Our pilot director was the filmmaker Juan José Campanella; we turned on an IBM for him, and he turned to us and said, “That's what it does?” It was so boring to him. Now we have screens that are blank, and actors typing and building websites that are inert pages; that's even less interesting.

JGC: Is this some sort of terrifying metaphor? The machine doesn’t know what we are typing?

CC: We tried to turn one machine on, and it actually caught on fire.

JGC: How do you research this show?

CC: Carl has been an incredible resource on our show; he's a venture capitalist who has done everything under the sun, and he does this because he loves it.

We liberal arts guys needed someone like Carl to help us understand what was going on

Everything we have tried to put on screen we have tried to get right, out of respect for historical telling. But we had to go from perfectly right to defensible, because sometimes even our sources began to disagree with each other.

I just learned that we accidentally used the 2013 reissue of Doom; people got pissed off. At a certain point, we're doing the best we can. Hopefully the human drama is carrying you through.

JGC: How do you get inside characters’ heads?

CC: The actors do their homework to try to understand as much as possible, but we try to convey that these characters are masters of their field; the viewers have to trust that they know what they're talking about.

It’s really about character stories. “Technology can be a delicious metaphor for so many things.” E.g. Automated vs human touch.

You can pit the characters that have so much animus toward each other against each other.

If we get sidetracked in the writers' room talking about printer drivers, we've got to bring it back to the human drama.


Q: Being a grey beard who has worked in Silicon Valley for 40 years, I noticed it was mainly engineers running things at first; then transition to business types in the 1990s. Do you agree with that phenomenon, and will it affect your future storylines?

CC: Season 4 is our last; there is a push and pull between those who build and understand the technology, and those who sell it.

When you have someone who is just “the suit” / the ideas guy, there’s a really interesting struggle that we try to dramatize throughout.

As the tech gets more ephemeral and seems like magic, the business guys may have gained the upper hand. You see the venture capitalists holding all the chips, and the engineers fewer and farther between, in the later episodes.

Q: I'm assuming you've seen the movie Hackers, with a visualization of traveling through the network. Have you thought about other ways of visualizing the activity of sitting down at the computer to do this work?

CC: We have. It's tricky. We once tried to do a sequence with two characters moving through a digital community they created online, but it looked bad.

Sometimes we can visualize, and sometimes we have to go with what’s real, and I think sometimes a viewer can respond to the latter more.

Q: What about the notion of origin story? Do you think there are 4 seasons of drama buried behind every million dollar company?

CC: The way we determine that is by meeting with the people themselves on the ground; that's where we've gotten the best stories. Carl has amazing stories, so I'm sure the same could be said of Cloudflare.

All our sessions will be streamed live! If you can't make it to Summit, here's the link:

Categories: Technology

Private Companies, Public Squares

Fri, 15/09/2017 - 00:18

Daphne Keller, Director, Stanford Center for Internet & Society, and Lee Rowland, Senior Staff Attorney, ACLU Speech, Privacy & Technology Project

Moderator: Matthew Prince, Co-Founder & CEO, Cloudflare

Photo by Cloudflare Staff

MP: Technology and law seem like they are colliding more and more. Tech companies are being asked to regulate content. For a largely non-lawyer audience, give us some foundations about basic rules when you have content on your network?

LR: Communications 2.0 makes the First Amendment almost quaint. The vast majority of the speech we exchange happens online. When it is hosted by private companies, the First Amendment doesn't constrain it. So this is a space governed by norms and the individual choices of people like Matthew. In the wake of Cloudflare's decision to take down the Daily Stormer, Matthew penned a piece saying, in effect: it's scary that we have this power, and I exercised it. We have a completely unaccountable private medium of communication.

MP: There are shields for companies for this; What is intermediary liability and why is this a position at Google/Stanford?

DK: No one knows what it means; it's a set of laws that tell platforms when they have to take down user speech because that speech is illegal. In the US, platforms don't have to take anything down; but outside the US, the rule is that when platforms discover something illegal, they have to take it down or face liability themselves. The problem is that anytime someone alleges that something is illegal, it can be taken down. So the rules about when platforms should do this are very consequential for the practical free speech rights of users on the internet.

LR: We can’t undervalue how much these rules have created today’s online ecosystem: Yelp would not exist without intermediary liability. Any content provider platform exists because of these laws passed in late 90s.

MP: In both the US and the EU, laws are coming under threat; we tend to focus on US, but Germany’s top priority in the last G7 meeting was limiting intermediary liability.

LR: There's an opportunity here for companies with ties to the US to make sure that we don't allow countries with less protective speech regimes to ratchet down to the lowest common denominator. Multinational pressures risk going to that lowest common denominator. I think companies like Cloudflare have a duty to uphold the values that reflect our First Amendment landscape. Do we want a world where Nazis cannot have a website? It's not a comfortable thing to talk about, but I want the ability to see and find speech that reflects human beliefs, because that's how we know it is out there. Enforcing that kind of purity only hides beliefs; it does not change them. Companies that are part of web infrastructure have a fundamental responsibility to provide a neutral platform. We are providing a neutral platform, and it's other people's job to see that speech and counter it.

DK: There’s also an ugly dynamic between governments and major platforms; private companies are taking over government functions, which is weird because they are not subject to government constraints. This creates an opportunity where private companies can do things that governments can't but may want to, e.g. collecting user data.

In Europe, the commission reached agreement with 4 big platforms on the EU hate speech code of conduct: The agreement was that they would voluntarily take down hate speech as described in the agreement, which is not the same as hate speech as defined in the law. They are voluntarily agreeing with the government to take down hateful speech. Many Americans find this odd.

MP: Is this a fight that we can win? Views on free expression ideals have changed since 4 years ago; “don’t be evil” doesn’t translate well in German; What argument persuades rest of world that we should be neutral platform?

LR: These borders have real impacts on speech; but for American consumers and companies giving internet access to American Internet users, we do have the ability to help people understand not to race to moral panics. No one is out there picketing AT&T because Richard Spencer has a cell phone account with them.

MP: We have had a tradition of newspapers having editorial perspective, conservative or liberal.
Is Facebook like the modern newspaper? Or are they like the printing press? What is the analogy that makes sense?

DK: In Europe, people are inclined to say that Facebook needs to admit that it is a media company. The difference between Facebook and a media company is that the media company hand-selected everything that it published, whereas Facebook is an open platform.

MP: But if you put up a link to Daily Stormer on Facebook with support for the site, it was taken down; if you were critical of the organization, however, it was kept up. That sounds like a media company.

DK: They take down a lot. That’s not the same as saying they could be legally accountable for everything that is transmitted on their platform.

LR: I do think that people on a gut level hold newspapers accountable for their world view.
Facebook already exists as a content review company; they’re a platform but they've always had algorithms and curation. Each of these is a choice that affects what you hear/see.

MP: “it’s the algorithm it’s neutral”

LR: That has always struck me as horseshit...

MP: Does it surprise you there’s not a Fox News search engine?

LR: This has been constant conversation in the net neutrality debate. Internet service providers have said: we don’t discriminate: but we want the right to not take you to a certain website.

Can you have a bespoke ISP? The Disney ISP that makes damn sure you don’t see porn? Maybe; no one has done it. People are willing to replicate their own bubble, and there seems to be enough demand for that.

DK: The fact that there isn’t a Fox News search engine is actually important.

People who are saying, Facebook should not be able to take over my political speech are also noting that there is no place else to go: friends, etc. are all on Facebook. It matters when there’s somewhere else to go. If there’s only one place to go, it’s easier to imagine there being government regulations on them.

MP: The question is: is there any scale at which you think maybe it’s not the right time… Steve Bannon is proposing that giant companies should be regulated as utilities. Is there a time when that’s the right way to think about your status? If you are Facebook and you are the only place to reach this audience, does that mean you have another set of obligations?

DK: I don't think that works. This may apply to your business, but for the service that Facebook offers, the service creates a community that people want to come to because it is not full of hate speech and bullying. And without that kind of curation, they would no longer have the value proposition for their users.

MP: That suggests that there are different rules depending on where you are in the stack. What should a registrar do vs. DNS vs. browser provider? What is the framework you’d use to determine where internet is or is not neutral vs. curated?

LR: I want to admit that as a 1st amendment advocate, there are interests on the other side. I may think it is a dangerous precedent, but you have the right to decide who to keep and kick off.

For us, as ACLU, we focus on two things:
Government subsidies and the kind of centrality and importance of that service.

Are you a neutral … or common carrier? Are you actively curating content?

Generally there isn’t a model where you are distinguishing based on content; this isn’t the most profitable path to success.

MP: ACLU has been force for free speech in US; who is fighting for free and open web outside of this country?

DK: There are organizations around the world that work on this. Some of the best efforts are in Brazil, Argentina, India; much smaller in EU. We're paying attention to these differences.

It's important for smaller companies, for journalistic interests to show up and let them know.

MP: What are the arguments you’ve found that are persuasive in these conversations about regulation? What works?

DK: I think people get it when you say you are sacrificing sovereignty by standing back and asking an American company to decide this for you. In some cases, the economic argument is also persuasive. Outside the US, American lawyers yelling about the 1st amendment do not get much respect. But there are other important points you can make.

LR: Domestically, if we’re talking about convincing legislators to think about roles, there’s the Communications Decency Act. At the time in the late 90s when it was passed it was overwhelmingly bipartisan because conservatives and republicans knew Silicon Valley is liberal.

In the last 15 years, there has been moral panic about human trafficking online. Some of the unholy alliances come when women’s advocates on the left and libertarians on the right agree with each other. SESTA is the first time Congress has amended the CDA since the late 90s.

The only thing that’s ever effective besides a lawsuit is reminding people that they might be the goose or the gander next time. You might not always be on the right side.

Facebook agreed to the hate speech rules, and many human rights activists’ voices have been silenced under that agreement. The Intercept has an article on human rights activists who have been silenced by this over-censoring.

MP: What are 1 or 2 things that you are worried about, that people aren’t thinking enough about right now?

DK: There is tremendous pressure to build technical filters to find and suppress content, and a widespread belief that this tech can be built to identify terrorist speech. Companies are under pressure and end up agreeing; the result is that videos documenting atrocities in Syria are being taken down. So the push for mechanized content removal is very dangerous.

LR: I totally agree, and I also highlight the importance of due process. If someone censors our speech we can say, hey wait a minute. But you don't have that option with FB.
Hand in hand, algorithmic ratcheting combined with lack of due process is a problem.


Q: Besides basic issue about media making judgments about censorship, there are two additional dangers: 1) what makes companies like Cloudflare more or less susceptible to pressure from governments; 2) the danger of companies colluding on these things.

DK: On vulnerability: what makes you vulnerable to pressure from a government is people on the ground who can be arrested; assets that can be seized; wanting a market in that country, or already having a market that you are afraid to lose. In terms of collusion, I worry about a monoculture that systematically discriminates against the speech of particular people.

Companies that don't want to be regulated decide to self-regulate.

Q: One of the challenges with open internet is its openness; what about dark web that is encrypted? Is that potentially an answer, where regulating free speech becomes difficult because we don’t know where it comes from.

LR: I think it addresses the free speech values problem; but for the average internet user, it will probably create a less attractive ecosystem. If you want anonymity that’s great, but is it an actually useful web? If you want a useful web that is free, effective, and accessible, the answer is probably no.

All our sessions will be streamed live! If you can't make it to Summit, here's the link:

Categories: Technology

Betting on Blockchain

Thu, 14/09/2017 - 23:20

Juan Benet, Founder, Protocol Labs, and Jill Carlson, GM, Tezos Foundation

Moderator: Jen Taylor, Head of Product, Cloudflare

Photo by Cloudflare Staff

JT: Tell us what blockchain is.

JC: Going back to 2008, the advent of blockchain came with the Bitcoin white paper.

The word Blockchain wasn’t mentioned at that point, but that was the advent of this tech.

What it solved was a niche problem called the double-spend problem, enabling the creation of digital cash.

What you see in a bank account isn’t digital cash. The problem in cryptography was how to create digital cash that doesn't rely on a 3rd-party intermediary. This is what Bitcoin created.

JB: Blockchain packs in lots of stuff; it’s useful as a brand. Like internet/web in the early 90s, the meaning is fuzzy.

Properties that all of these apps have in common:

Academic definition: A blockchain is an indelible chain of blocks; once you insert information into one of them it remains.

Marketing definition: many applications have been developed over last few years, all have to do with public verifiability. Reliance on cryptographic methods to achieve goals on clearing payments and the ability to check and verify.

Across the board, removing 3rd parties from equation. Establishing publicly verifiable state of structures. Trust protocol removes trust needed from individual parties.
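The "indelible chain of blocks" and public verifiability ideas above can be sketched with a toy hash chain. This is an illustrative sketch only (function names and transaction strings are invented for the example); real blockchains add consensus, signatures, and Merkle trees on top:

```python
import hashlib
import json

def block_hash(block):
    # Hash a block's full contents, including the previous block's hash.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    # Each new block commits to the hash of the block before it.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def verify(chain):
    # Anyone can re-check the links: tampering with an earlier
    # block breaks every later link, which is what makes the
    # chain publicly verifiable.
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, "alice pays bob 5")
append_block(chain, "bob pays carol 2")
assert verify(chain)

chain[0]["data"] = "alice pays bob 500"  # tamper with history
assert not verify(chain)
```

Because rewriting one block invalidates every hash after it, "inserting information so it remains" follows directly from the linked hashes rather than from trusting any single party.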

It points to a return to what people called for in the early 2000s. Decentralization of the power structures that control the internet.

Removing power from entrenched places.

JT: you’re both doing great work with organizations looking at moving blockchains forward. What is currently happening with this tech?

JC: I work with Tezos, a blockchain protocol and platform used to build decentralized apps. It hearkens back to the concept of a hard fork of a blockchain.

Hard fork: Bitcoin split into two different assets, Bitcoin and Bitcoin Cash.

Comes back to the idea of decentralization. Decentralization offers many things; one problem it raises is how you push upgrades to the tech. Generally there is one centralized party; with blockchain it’s different. There is lots of politicized infighting among communities and users of the tech; Tezos seeks to solve this infighting and enable coordination.

If everyone here owned one Tezos token, everyone would have one vote as to how the roadmap proceeds.

They also seek to innovate on formal verification of the protocol's core codebase, to make applications easier and more accessible.

This comes back to the language we’ve chosen for the protocol and implementations on top of that. The tech will underpin trillions of dollars worth of industry, and it should be built with that in mind.

JB: We work on IPFS and …
IPFS is a decentralized hypermedia protocol.
Think of the web, but where the web itself had no notion of locations or sites and was more decentralized than now; content would not be addressed by where it is and who owns it, but instead by what the information is itself. The same information would have the same address. That isn’t how the web works today. We want to rethink the stack for how the web works: content addressing rather than location addressing. Peer-to-peer structure.
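The difference between location addressing and content addressing can be shown with a minimal sketch: an address derived from a hash of the bytes themselves. This is a simplification for illustration (IPFS actually uses multihash-based CIDs, not bare SHA-256 hex digests):

```python
import hashlib

def content_address(data: bytes) -> str:
    # A content address is derived from the data itself,
    # not from where the data is hosted or who owns it.
    return hashlib.sha256(data).hexdigest()

page = b"<html>hello</html>"

# The same content yields the same address, wherever it is stored
# and whoever serves it.
assert content_address(page) == content_address(b"<html>hello</html>")

# Different content yields a different address, so the address
# also verifies the data you receive.
assert content_address(page) != content_address(b"<html>bye</html>")
```

This is why, in a content-addressed web, any peer holding the bytes can serve them and the requester can still verify it got exactly what it asked for.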

Think about how easy it is for content to become hypercentralized and censored;

Also efficiency: channels of low bandwidth and so on.

If we can move entire sections of the web to a remote location and serve them at the protocol level, we can take what we’ve learned from CDNs and build it into the protocols themselves.

Finally, it’s a way of thinking: if you have a decentralized way of creating protocols that organize work in a public network, can you organize a system to store data for all of it? A utopian decentralized market where storage is a proper commodity, allowing ISPs to participate in cloud storage.

Today we have a hypercentralized storage system as well.

JT: The power of decentralization could really change the world. What are some of the other benefits or uses that we could apply this tech to?

You get to work with the community in such a rich way; what other use cases?

JC: Inspiration from investment bank. Started off as a bond trader.

The real innovation is just not about decentralization like bitcoin but also reshaping entire market structures.

Reshaping entire market structures that today depend on rent-seeking middlemen. Logical conclusion of this is: redefining what it means to own something in digital form.

Today we don’t really own anything that is in digital form; a Bank of America database represents my ownership. So I get excited thinking about how completely different market structures will look in a couple of years.

JB: At its core, this has to do with establishing a decentralized computing platform where you can run programs and encode business logic, where participants can’t overturn results, and there is no litigation over the events that take place. What happens to law when you can express legal agreements in a digital context?

Transactions are easier if you don’t have to draft agreements and think about them in depth every time.

The major innovation with blockchain is that law and finance were right away ripe for changes, in terms of investments and ownership.

You have the first real wave of smart contracts, finance and law are immediately being changed.

Potential is massive: you can change pretty much everything, how we reason about markets and providing services and utilities. This is the first public utility that is completely international, governing themselves.

You can run all kinds of services: cloud storage, cloud computing changed in fundamental ways.

That said, it will take a while. UX is still atrocious. Quality of platform is bad relative to modern standards. Considering migrating an application into a different context is almost a non-starter. The tech has to catch up with banking to enable developers to change how they maintain applications.

You’ll have developers able to create something like Twitter, put it into the network, and never have to worry about maintaining it anymore, because participants will. A completely different way of approaching development.

Now is the right time to get involved to help develop.

When you have a contract that is 100 lines of code, you want to leverage all you can to make sure you get the right answer.

JC: Precisely because there are no 3rd party intermediaries to call about reversing an action.


Q: I am an investment banker, but I don’t understand what mining is.

JB: Think about a decentralized consensus protocol, where a bunch of parties are proposing values for the head of the chain, and they have to agree upon what that value is.
Mining is a way of requiring lots of work/resource expenditure in order to propose a vote on a value.
You have a whole bunch of people with computers hooked up, trying to give one value weight and declare a winner.
It’s like a voting system where you use resource expenditure...

JC: proof of stake algorithm vs. proof of work system

On any blockchain network you need a validator who verifies certain things about transactions and then broadcasts that batch of transactions as a block to the network. The validator gets elected based on how much computational power they are putting into the system.
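The "resource expenditure" behind proof-of-work mining is just a brute-force hash search. Here is a toy sketch (the header string and difficulty are invented for the example; real networks hash full block headers against a dynamically adjusted numeric target):

```python
import hashlib

def mine(header: str, difficulty: int) -> int:
    # Search for a nonce such that hashing the header plus the nonce
    # produces a digest starting with `difficulty` zero hex digits.
    # The only way to find one is trial and error, which is the
    # "work" that lends weight to a miner's proposed block.
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("block-header", 3)
digest = hashlib.sha256(f"block-header{nonce}".encode()).hexdigest()
assert digest.startswith("000")
```

Finding the nonce is expensive, but anyone can verify it with a single hash; proof of stake replaces this hash lottery with an election weighted by token holdings, as described below.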

The next generation of systems will use proof of stake, where the election process relies on how many tokens you have; this creates a new incentive structure.

JB: We found a way to give resource expenditure a valuable side effect; otherwise, mining is useless. It’s useful insofar as it lends weight to your proposed value, but it has no value beyond that.

We found a way to make the work valuable: the storage of files, where the computational work proves to the network that you have actually stored the files. We also think proof of stake is a valuable area of research for the future.

Governance of these systems will evolve dramatically over the next years.

Q: I’m curious how people are thinking about preventing the recentralization of things, e.g. smart contracts. In agreeing on the price of wheat on a given day, everyone has to agree on what that price is, and certain nodes have more power. Secondly, how are you thinking about preventing recentralization as you go through these processes of decentralization?

JB: There are many things happening that might cause recentralization. An oracle solution is solid if you have verifiability and if you know the oracles can’t charge exorbitant fees.
The approach is to decentralize in pieces: a good-enough solution for now, then go back and decentralize more along the way.

We think about it in terms of storage providers or distribution providers: how to carefully structure things to get as much value as possible.

JC: The running joke in the crypto space is that cryptocurrency has created far more 3rd parties than it has destroyed.

A new protocol has to be very specific about the problem it is trying to solve. If I am in Venezuela using cryptocurrency, the trust problem I’m solving there is different from the trust problem in file storage. One protocol won't solve every trust problem.

There are different trust problems and there is no “one protocol to rule them all”


Categories: Technology

The New Breed of Patent Trolls

Thu, 14/09/2017 - 22:38

Lee Cheng, President & Co-CLO, Symmetry IP LLC, and Vera Ranieri, Staff Attorney, Electronic Frontier Foundation

Moderator: Doug Kramer, General Counsel, Cloudflare

Photo by Cloudflare Staff

DK: Patent and IP issues and challenges are accelerating, with important Supreme Court cases and a flurry of legislative activity about patents. It's a good idea to talk about this topic: where is this going? How do we push the world in a virtuous direction?

DK: Let's start with the current state of affairs. Vera: at the core is the patent itself, which is issued by and often adjudged by the patent office… is this where the problem lies?

VR: I like to blame everyone. How does someone get a patent in the first place? Someone comes up with an invention, goes to a patent attorney who documents it in opaque language, and files. The examiner then interprets the patent and searches for prior art, and says “I think this is what the patent owner is trying to claim.”

In the software space, it’s especially difficult. A lot of where inventing happens in software is right here, in businesses. People have a problem and find a solution by developing software. They don’t patent and publish.

The patent office tends to focus on prior patents when searching.

DK: Talk about the incentive structure for those examiners.

VR: Patent examiners are part of a union, and their deal includes doing the work and getting credit for issuing patents. There’s no way to finally reject a patent because the applicant can come back over and over again. So most patent examiners will issue the patent and let people deal with it later.

DK: Is there anything in this system that could change the dynamic?

VR: I’d give patent examiners more time, which they lack. There are also currently incentives at the patent office not to search the Internet. Patents don't become public until 18 months after a patent application is filed.

DK: So then how do enforcement proceedings work? Is that where the fault should lie?

LC: Patent trolling is a manifestation of litigation abuse. If you look at the problem historically, it’s far too easy in america to sue someone and almost impossible to hold someone accountable. It’s costly to defend against assertions of patent abuse.

One of the reasons we were able to embark on the strategy we did is that there were already signs at higher levels of the judiciary that this was a problem. So there have been a lot of cases over the years that have been rationalizing patent law. It’s incredibly slow and easy to find loopholes. We still have trolling today; the best we can hope for is to drive it to a sustainable nuisance level.

DK: So this is a moving target; do you have any examples of new and expanded challenges of creative assertion of patents?

LC: You can think of trolling as part of the litigation industry. There’s so much money at stake. So, it’s not surprising that you have creative human beings on the complainant side. They’re protecting their livelihoods and will evolve their tactics. We see developments; recently there has been news about [a medical company] selling to an Indian tribe and making an argument that the tribe is protected against litigation. It’s a cat and mouse game.

DK: IP is property; the question becomes how do you allocate or set up an incentive structure that leads to optimal allocation for societal good? At the administrative level, how do you set up patent application or process in a way that could lead to optimal allocation?

VR: Don’t be confused: I don't think we need the patent office to do a thorough job on every application. The vast majority of patents have no economic value, and only certain patents cause the problem. If a patent becomes economically important, maybe charge those owners more money to weed out the others that aren’t economically important. Prove its worth. We should say to patent owners: “If you want to keep this patent, prove it by paying for it.” Right now a lot of the costs are on the people who have ostensibly infringed.

DK: Reallocation of costs or raise review to prove value. Is there a reasonable way to get companies in the game before they get sued?

VR: The way the system is set up right now, if competitors try to participate, they will hurt themselves. The public should be a patent office customer; I’d like to see lower costs for challenging patents. Patent owners are pushing back because it takes away some of their leverage.

DK: The country issues patents and congress later finds a flaw in that, and sets up their own process. But now there is a challenge about whether or not they can do that.

Do you sympathize with the argument that these are important rights?

VR: I’d have more sympathy if there were more rigorous evaluation at the outset. What I see is not that rigorous an evaluation: allowing someone to say “I have a very strong property right” after the office spent only 19 hours looking at it. There’s an imbalance right now.

LC: It’s already a high bar to file an IPR. Better than going through court, I see the solution as economic: achieving the end goal of the patent system, which is benefit to society.

DK: Is there a way that you could define the genuine attempt to practice a patent?

LC: I’d love to see compromise: If something’s not practiced, you get your filing fee back.
Patents are monopolies and monopolies are bad for society. We shouldn’t have an arm of the government handing them out like candy. You can reward the garage inventors, but they don’t deserve a gigantic windfall if they don't bring the economic advantages to society.

DK: Let’s talk about wins in other direction: People who win litigation have their own patents. How do you think about achieving balance between company innovation & value as an individual with own IP?

LC: I’m a purist and idealist; I think that for the people who start companies, none of those successful companies became successful initially. They made products and services that added tremendous value to society and to everyone’s lives. They eventually developed robust portfolios. I would hope that their founders keep in mind the importance of preserving the ability to start companies. I want them to maintain that sense of idealism about what patents represent.

DK: The paradigm is the pharmaceutical industry. In twenty years and one day the price of a pill goes to pennies. It’s hard to imagine a corollary in the tech world where you just wait 20 years and a day and then you can use all of Facebook’s patents.

What is one change you could make that would move the needle toward positive changes in patents?

VR: I was thinking if I lived in a perfect world, everyone would get free lawyers. The rationale is that litigation is really expensive; what I see is that people aren’t winning or losing patent suits not because of merits or demerits but because of the cost of their lawyers. When you know you’re in the right but your lawyer tells you it’ll cost $200,000 and they can’t promise anything, it’s a wise business decision to just pay the $50,000 to the patent office instead.

LC: One of my wishes was granted: “TC Heartland” was decided. That has impacted 40-50% of abusive patent cases.

DK: Before TC Heartland, you could sue a company anywhere it was selling its product, and a lot of these cases were centered in East Texas. Was this about home field advantage?

LC: It wasn’t even a home field advantage. Judges wanted to make sure these communities were economically stimulated and these cases could drive a lot of revenue. So East Texas ended up becoming a place where about 45% of all patent cases in the US were being filed.

One decision addresses potential huge volume of frivolous patent litigation.

LC: I would also reform damages.


Q: I have two questions: 1) There hasn’t been much discussion about post-grant review and I’d love to hear your thoughts. 2) There was discussion of the Oil States case; I’d like your thoughts on that.

VR: Post-grant review can be very helpful in one or two areas, but not really in the software space. There are too many to search through.

Post-grant review is a procedure for newly issued patents where a third-party can come in and say it shouldn't have been issued. But you have to do it pretty quickly. Unless you are a large company with legal resources, I don't see much of an impact in software.

DK: if you're in software, you don’t know how the patent is going to get applied.

LC: it doesn't bother me at all…

Q: One of the problems with post-grant review is that if you lose, you increase your damages.
IPRs can consider 101 grounds as prior art.
The standard of proof is much higher on art that the patent office didn't consider.
The patent owner has a long time to write an infringement report; it's a tremendous disadvantage to… with a 3-week turnaround.

DK: Part of the thread we’ve been running through is the idea that the playing field is tilting.

We’re not really dealing with one rule here, but a series of different rules…

VR: On the 101 issue: the Supreme Court recently reaffirmed that you can't get a patent on an abstract idea. This is important to software patents because many are abstract ideas, without any technical explanation of how to implement the idea.
To have a way to decide that is important.

LC: hopefully everyone in the audience will take a stand against the injustice of the patent system


Categories: Technology

If I Knew Then What I Know Now: Tales from the Early Internet

Thu, 14/09/2017 - 22:16

Paul Mockapetris, Inventor, DNS, and David Conrad, CTO, ICANN

Moderator: Matthew Prince, Co-Founder & CEO, Cloudflare

Photo by Cloudflare Staff

MP: You guys wrote all this stuff; why is the internet so broken?

PM: People complain about security flaws, but there is no security in the original design of DNS. I think of it as: we haven’t had the right investment in rebuilding the infrastructure.

The original stuff was only good for 10 years, but we’ve been using it for 30.

DC: The fact that we were able to get a packet from one machine to another in the early days was astonishing in itself.

MP: So what are you worried about in terms of Internet infrastructure that we aren’t even thinking about?

PM: I’m worried about the fact that a lot of places like the IETF are very incremental in their thinking, and that people aren’t willing to take the next big jump, e.g. the hesitancy to adopt blockchain.

Being able to experiment and try new stuff is important.

There's an idea that you can't change anything because it will affect the security and stability of the internet. We need to weigh benefits and risks or we will eventually die of old age.

DC: Typically, the security of the routing system. There are people out there who might route stuff inappropriately. I’m not confident about some solutions that have been proposed. The complexity of the system is starting to bite us pretty hard.

Also, more so, I worry about the ability of bad guys to aim DDoS cannons at any service or target. It's way too easy to overwhelm anything in the infrastructure.

MP: So if a lot of this is about being stuck in the incremental world and not making inventions, is it getting worse or better? Is there any hope?

PM: Some of it is more basic technology. Stevie Wonder said that when you believe in things you don’t understand, then you suffer. We need to think about routing as a computational problem with bilateral or multilateral agreements. And people can control their destiny a little bit more.

It’s also a competitive marketplace.

Think about using tech so people can update the agreements that they have

MP: But how do you move things forward, given incrementalism? What is the path to actually replace DNS with blockchain? Do we need to move away from bottom-up internet governance?

PM: I don’t know exactly how you do it. It's the case that organizations have gotten big enough that they can make their own custom equipment. The software has always defined the network. So how can you have interfaces that allow collaboration with as much control and reliability as you’d like?

I think the next frontier is to think about ways to do distributed synchronization contracts. Coordinating addresses and names by your own tools. We need more investment in the capabilities of the infrastructure.

DC: I agree; we have reached a stage of semi-equilibrium with standards, resulting in ossification of the underlying infrastructure. This also permits thinking outside the box. After a while, people will get tired of the proprietary stuff and start another round of standardization. It’s a cycle.
E.g. DNS over HTTP

There have been increasing calls for standardization corporations to formulate a standard way of doing these things.

The other problem is that you start getting vested interests who don’t want progress; they like the niche that they’ve developed for themselves. And they like revenue streams.
The cycle of disruption and equilibrium will continue. The IETF is struggling to understand how it will remain relevant moving forward in a way that allows for disruptive technologies to come in and change the underlying game.

MP: Related to the internet governance debate, what do you say to Ted Cruz when he says the US gave up control of the internet? Does he have a point?

DC: No. Fundamentally, the internet is a network of networks. You can get into questions at a point about what happens when an app reaches critical mass and whether it has regulatory implications. By and large, the internet has no mechanisms of control.

MP: It seemed like the internet was working okay before; why did the US strike the provision that says we can go in and potentially veto what ICANN was doing?
What was the rationale?

DC: Part of it was misunderstanding. The primary role of US govt was to make sure ICANN didn’t do something stupid. And after 12 years of not having anything stupid happen, they realized that not doing anything to the root zone was causing a lot of political problems internationally. So they decided to let the contract expire.

MP: There was/is real risk that the internet gets governed by a much more political organization that would transform the way the internet is governed to a top-down organization. Unlike what Cruz says, the move by the last administration to say they wouldn't be able to control the internet anymore was a brilliant political move.

DC: Alternative to Cruz’s approach is fragmented internet, with national networks connected with gateways.
And that has implications with regards to the ability of internet organizations to reach markets they would like to reach

MP: Can we avoid that? Can we have a non-fragmented internet? I’m less sure that this is the case today vs. 4 years ago.

PM: The internet has cracks in it today. The only real issue is how fragmented it is going to get. When I visited China once, at the local hotel you had open internet, but only for westerners who happened to be visiting. It is going to fragment; political people will press their agenda.

I wish I could make a deal with the US government where I could say: okay, you can have my data, but you should be protecting me from other people. Negotiations are going to continue.

MP: is there something technically that you wish you had done in the design that would have better resisted that fragmenting?

PM: When I was at ICANN, people were saying that the US govt should not be in control of all of this; and that was a great attitude, but the US govt can be persuasive. There will be different shades. You can’t expect people to think that the internet isn’t part of the regular world. It is. So regular rules will be applied to it.

MP: what’s changed? Do you feel less idealistic and optimistic, or have you always been pessimistic?

PM: My message is: should I look at Telegram or Signal? I can’t do anything about the US govt, but I want to protect my privacy from commercial organizations. To me it’s more that we have to think about being more aggressive about protecting our privacy ourselves. But we should be asking the govt to protect us, not just act as the storehouse holding all our conversations.

Until we make security user-friendly, we won’t use it as much, and then it won’t protect us.

DC: The technology for filtering and blocking moves with other technologies, and it’s getting better over time. I’m not particularly optimistic, but I think that ultimately the network derives value from the number of people who connect to it. Once you filter or block significant parts of the internet, it begins to lose value.

There is an effort to try to protect the data that is being transferred. There will be man-in-the-middle taps and data taps, but ultimately the value that the internet brings will provide a way to ensure the infrastructure continues to operate. There will be islands and gateways, but when GDP starts depending on connectivity, that sends a signal to govts.

MP: a lot of the world that looked to the US for internet leadership, they see where growth is coming from, and that is China.

DC: China has imposed strict control of info. But you also look at Europe and India and other places moving toward a more open regime focused on privacy. It is unfortunate that the US is stepping back from the leading role that it had.

PM: This whole business about filtering being harmful is not where we are today. Is there anyone in the audience who doesn’t want to use anti-spam on email?

MP: but that’s your decision, not the govt’s.

PM: Reputation filtering is my first line of defense. Filtering is good tech, but that doesn’t mean it can’t be used for bad as well as good. We should be worried about sharpening that up rather than worrying about censorship.

One question I always want to ask is: is email routing more secure than PGP? If you connect me to a billion more people, I don’t have time to talk to them.

MP: But if there’s the opportunity to talk to one, isn’t there some value?

PM: being selective about who you connect to… why would you talk to some unknown person if you wouldn’t go to a restaurant without looking at reviews?


Q: You talked about fragmentation; when will the Great Firewall of China have an adverse effect on the Chinese government? When will cracks start to appear in that?

DC: Depending on who you talk to, the Great Firewall of China is either the best thing that God has created or it is already impacting the ability of Chinese companies to work in a global market.

Because there is so much potential for growth in China, control is winning. But as soon as Chinese organizations look for larger markets, you’ll start to see changes in the way that firewall is operated.

MP: When we travel over there, the lack of ability to run a Google search and find code that you need is something that engineers on the ground in China complain about today. If Chinese companies were to stop thinking of their market as only inside China (think Snapchat), the country would start to look more outward.

PM: The jury is still out. Darwin isn’t necessarily in favor of liberalism. Be comforted by specific examples like market access. But there is still reason to be scared.

MP: Ok, final question - Bitcoin @ $4,500 or IPv4 addresses @ $12.00 - what is better investment?

DC: IPv4

All our sessions will be streamed live! If you can't make it to Summit, here's the link:

Categories: Technology

Making the World Better by Breaking Things

Thu, 14/09/2017 - 21:36

Ben Sadeghipour, Technical Account Manager, HackerOne, and Katie Moussouris, Founder & CEO, Luta Security

Moderator: John Graham-Cumming, CTO, Cloudflare

Photo by Cloudflare Staff

JGC: We’re going to talk about hacking

Katie Moussouris helps organizations work through security vulnerabilities.

Ben Sadeghipour is a technical account manager at HackerOne, and a hacker at night

JGC: Ben, you say you’re a hacker by night. Tell us about this.

BS: It depends who you ask, and whether they encourage it. We’re “ethical hackers” - we do it for a good reason. Hacking can be illegal if you’re hacking without permission, but that’s not what we do.

JGC: You stay up all night

BS: I lock myself in the basement

JGC: Tell us about your company.

KM: I was invited to brief the Pentagon when I worked at Microsoft; the Pentagon was interested in how a large corporation like Microsoft had implemented this idea - that became “Hack the Pentagon.” The adoption of bug bounties has been slow. I helped the internal team at the Pentagon ask a bunch of questions. I told them: “You’re already receiving a free pen test. You’re just not receiving the report.”

It was about trying to engage with the hacker community and provide a legal avenue to report to the Department of Defense.

It was important for the largest military organization in the world to admit that it hadn’t identified all the bugs.

BS: Two years ago, no one would admit they hacked the government. Now it’s an important conversation to have.

JGC: Has the navy done it yet?

BS: That’s something we don’t know yet.

JGC: What you’re doing is not illegal, but there are some laws. What is the grey area? How are you not breaking the law?

BS: You’re okay as long as you’re following the policies.

JGC: Is this typical?

KM: When you get to potential impact, your well-meaning hacker will start to create some conflict.
They’ll say: describe the vulnerability, the steps to reproduce it, and the potential impact. We have opportunities to “clarify” the scoping rules.
Nation-states are different than private companies.

You’re giving permission to a hacker when you’re setting up a bug program; but there’s a fine line; it’s still a possible felony. When you’re thinking about it from the perspective of the DoD, you need to preserve the ability to go after a nation state, criminal actor, or any bad actor. So it’s a different kind of equity when you are creating the legalese.

I do this now with the UK govt, mapping to specific laws: preserving the power to litigate while giving permission.

JGC: Let’s talk about bug bounties themselves. What is it and how does it work?

BS: In short: allowing hackers to hack programs and having an open communication line with them. Taking the step to allow hackers to test an application.

JGC: And you get paid… so there’s a market for this stuff out there. Who is competing in this market?

KM: I prefer to think of it as the “offense market.” The highest prices are usually there. They are paying for both expertise and the longevity of the bug.

It’s not about selling to the highest bidder. Compensation, recognition, and the pursuit of intellectual happiness are why many hackers pursue this. The defensive market is lower paid; price is not the competition factor. If you try to compete on price, you will create a situation where you eventually cannot employ your own engineers.

So I look at: how do you find other levers than price?

JGC: What was your motivation for getting into hacking?

BS: First, curiosity. Then, to be able to help, knowing that i could make a difference. Third: the money aspect.

JGC: How do we create right Bug Bounty program for a company looking for it?

KM: My company prevents premature bountification; organizations come to me and say they’ve never had a bug reported.

I make sure that companies have enough automation on back end; there are more efficient ways than starting a bug bounty program to discover vulnerabilities.
This is much more than how you found out about the bug.

JGC: How do you find and motivate the right hackers? Don’t you get a lot of low-hanging fruit?

KM: There are good examples in open source. How do you explain Heartbleed, a bug that had been sitting in such a popular codebase for two years? How do you attract skilled eyes and focus them where you want them? Microsoft was receiving 200,000 non-spam e-mails about bugs. It is about understanding the behavioral economics at play, as opposed to gauging how much a project was worth and setting a price tag.

JGC: Ben, what do you think about recent Equifax breach? What can companies like that do to protect themselves from people like you?

BS: That’s a broad question. For me, I look for default settings.
Having a process of keeping these things updated.
Changing settings from default.

JGC: A lot of things get broken into; it’s not necessarily a sophisticated hack; it’s that the software wasn’t updated, and so on. Do bug bounties help with that? Or are there better ways?

BS: Yes, but they aren’t the only solution. Maybe the default password has been sitting there for years and no one has changed it. When bug bounties find those things, we fix them; but they’re not the only solution.

JGC: How else can hackers help me get stronger?

KM: No matter how you find out about the bug, that’s not the problem to be solved.

A bug bounty is one approach; but if a bug bounty shows a ton of low-hanging fruit, you could have found an intern to do that work.

There are more efficient things that you can do. A bug bounty is useful in giving a quick snapshot of the system. It’s useful in proving a point and showing for sure that vulnerabilities exist.

Even as consumers, there is an inundation of bugs that we all have to deal with, even if we don’t create software. There will be bugs that affect us as consumers. How do you as a consumer make a risk-based decision? Corporations make those same decisions; bug bounties help focus on what is most likely to get triggered and reconfigure.

JGC: During the presidential debates last year, Trump said that the hackings could be a guy in his basement. So who is hacking things?

KM: “Everybody is hacking everything.” We got the word espionage from the French; so “Hacking is just a new tool in the toolbox.”

We just happen to have our own equities that we need to protect along with our allies.

JGC: There is an informational imbalance between countries. When we think about spying as the second-oldest profession, it seems like hacking must have been around for a long time.
What would be your advice about protecting myself as a business from a hacker?

KM: “Nail the basics.” We keep talking about vulnerability coordination, and a bug being found and a vendor fixing that bug. What about fix deployment? How do we deal with that? Figure out patch management, your risks and tradeoffs, and your regulatory environment. What are mitigations?
You should run a number of tests before you’re allowed to deploy that patch.

JGC: What does it feel like to hack into something?

BS: It feels great. It’s great to be able to figure something out blindly.

Q: When I buy a car, I can look at safety ratings. A 5 star rating means you’re less likely to get killed in a crash.
Is there a way of ensuring computer security in this way?

KM: Gosh, wouldn’t that be great! There’s been some talk about a cyber UL - consumer-type ratings; but it gets very complicated very quickly. Just counting bugs, for example: do you count a root cause as one bug? Taxonomy is complicated; rating is complicated.

How do you count and rate? What does it mean once you rate? As new vulnerabilities are found, how do you deal with 5-star product? When we do have smart toasters, my plan is to have a dumb toaster.

Q: Can you re-explain “offensive vs. defensive market”?
KM: The offense market is the purchasing of vulnerabilities or exploits in order to use them for an attack; the defense market is things like bug bounty programs or third-party vulnerability acquisition services.

I define it by: what are you buying a vulnerability for?

Q: In terms of policy decisions, what should voters be looking out for?

KM: there was a recent proposed bill that DHS should run a bug bounty. I am opposed; you should be too. You cannot legislate a smoothly-run bug bounty program.

I worry about alliterative marketing: popularizing one method.

What I worry about is regulators thinking that everything is now a nail that can be hit by the bug bounty hammer.

Also, there is proposed legislation about wanting to know ingredient list of all software before fed govt buys it.

Now take that to its logical conclusion. A manufacturer of a submarine will now not just have to know the ingredient list of every component, but…

It’s important to keep Congress in tune with smart new tech policy choices, not just what’s trendy or in the latest news.


Categories: Technology

A Cloud Without Handcuffs

Thu, 14/09/2017 - 20:01

Brandon Philips, Co-Founder & CTO, CoreOS, and Joe Beda, CTO, Heptio, & Co-Founder, Kubernetes

Moderator: Alex Dyner, Co-Founder & COO, Cloudflare

Photo by Cloudflare Staff

We’re exploring the increasing risk of a few companies locking in customers and gaining more power over time.

AD: I want to hear your stories about how you got into what you do.

JB: Kubernetes faced the problem of either having Googlers keep using Borg or bringing that experience to the rest of the world. We wanted Googlers and outside people using something similar. We chose to do it as open source because you play a different game when you’re the underdog. Through open source we could garner interest. We wanted to provide application mobility.

AD: Brandon, talk about your mission and why you started company.

BP: We started CoreOS four years ago. We spent a lot of time thinking about this problem, and containers were the natural choice; they are necessary for achieving our mission. We wanted to allow people to have mobility around their applications. We wanted to enable a new security model through containers. So we started building a product portfolio.

AD: There are tradeoffs between using a container or an open source tech; how do you think about those tradeoffs?

BP: First, Kubernetes provides an application-centric view. The abstraction is: how do we create a platform? The project also tries to build useful integrations, but it’s really about that initial abstraction.

JB: One useful comparison is that Kubernetes is a kernel for the system. There is a feeling that we want to keep Kubernetes as a flexible kernel, while recognizing that you have to build integrations and user mode on top of it.

AD: How do you talk about different levels (developer, operational)?

JB: The advice I give is that lock-in is unavoidable. The question is: what is the risk of that lock-in? You have to weigh that risk against the benefits. If you’re a startup, you’re not worried about the risk of moving away from a public cloud provider; a very large company is. There are certain types of lock-in that present problems for operations teams versus development teams. Kubernetes makes lock-in an operational problem rather than a developmental problem.

BP: Operational: by using Kubernetes, people can bring up dev environments and test on internal infrastructure in our office. This is already providing value.

On the app side, risk comes in when cloud providers build databases where data is tied to the data center. Abstraction allows developers to be free from data center.

AD: How does that work over time?

BP: For many organizations it comes down to a cost-benefit analysis. They look at their application code and figure out how long they’re locked in. Leverage only comes when you can call a bluff. Basically it’s a business decision.

JB: It’s a new type of technical debt.
There is no one answer.

AD: As fewer people can do this, the salaries of mainframe programmers are going up; what do you think about that?

JB: There is an analogy between the big public clouds and the legacy mainframe. Even if it is no longer the preferred choice, the mainframe has a long future. It’s here to stay, even if the world moves on.

BP: The larger companies will be competing against the major tech companies that run clouds. We don’t have a term for it. Is it “cloud debt”? Cloud technical debt? It’s a nascent topic but becoming important - a new challenge.

JB: Data gravity.

AD: A lot of this is about Amazon---are other large vendors approaching this because of their market position?

JB: Amazon is the big elephant for sure. But this goes beyond Amazon. When you look at Kubernetes and containers, they provide a model that did not exist before. Amazon has been struggling to find the balance between infrastructure and ease of use.

So what is making this layer of infrastructure so interesting is not just multi-cloud strategy, but a different way of thinking about programming and automating applications.

The interesting stuff is how we utilize this new tool set.

BP: It’s about making sure the tech works across the board. When Kubernetes started, the tech wasn’t there yet for it to run on Amazon; one of our first challenges was to make it possible to get Kubernetes on Amazon. It’s an ongoing technological battle to figure out the abstractions while cloud providers are themselves innovating in data, network storage, etc.

AD: What’s the counter to, yes, CoreOS will help me not get locked into Amazon?

BP: Customers are getting APIs. We’re giving customers an API that we don’t modify, so they get upstream Kubernetes. We take open source software and integrate it; they can put that integration into their own apps.
It’s about taking pieces and providing a cohesive experience.

Not just infrastructure but application monitoring
A lot of value of the cloud is that it automates operations.

We provide you with open source software that is automated.

Software vendors have to start providing the value proposition of re-securing infrastructure when a vulnerability appears in the cloud. “Zero-toil automation.”


Q: Customers with critical applications usually use multiple networks; is this one value proposition of the cloud lock-in argument?

BP: We have seen both; it depends on their internal risk assessment. You can have a beautiful architecture for how your business will survive, but if you don’t have applications around it, it’s all pointless.

JB: Geography is important. Having a substrate to write app against is important.

BP: It will be interesting as we see global distribution of compute network and storage, the different cost-benefit analyses that will be available. A lot of competition will arise outside of the US in terms of building data centers.


Categories: Technology

Making Mobile Faster than Fixed Line

Thu, 14/09/2017 - 19:38

Cole Crawford, Founder & CEO, Vapor IO, and Chaitali Sengupta, Consultant, Qualcomm Datacenter Technologies

Moderator: Michelle Zatlyn, Co-Founder & COO, Cloudflare

Photo by Cloudflare Staff

CC: Has moved between the private and public sector.

CS: Her company added 100 million customers in India.

MZ: Let’s start with where we are today: trends or things you’re seeing in the marketplace that weren’t there 5 years ago.

CC: What’s interesting is the combination of data mass and data velocity, resulting in a more dynamic internet. E.g., latency wasn’t mentioned by customers at first; AI is helping to create a new low-latency internet.

CS: One of the biggest things is applying the lessons of the cloud to telecom to see how we can make systems more centralized and virtualized. Network function virtualization: putting things on general-purpose servers. Now that’s dovetailing into 5G, where we see more bandwidth.

MZ: We’re currently in 4G world; when will 5G standard get finalized?

CS: Standards are getting finalized; trials are getting started. Many 5G systems are up and running; NWC America ... is running trials already. I would say end of next year or 2019.

MZ: So the future is here, it’s just not evenly distributed? 4G took 2 years to roll out. Will it take another 2 years?

CC: It won’t. T-Mobile announced last week that what once took 24 months will now take 6 months.

MZ: Why a fraction of the time?

CC: New technologies. For all of the “nervous system” of AI, we also have to take care of the heartbeat / “cardiovascular” system. Consider Facebook, which has base stations now; they can save billions of dollars by innovating. Companies don’t want to be out-innovated.

CS: In the last 5 years there has been an industry-wide understanding of the need for more automation. This is making things simpler.

MZ: So competition is helping drive innovation. Let’s talk about data center technologies. What’s on the horizon for base stations?

CS: The base station is everything that stands between your device and the information it is trying to access or send. As that system has become more complex, it has become disaggregated. Now you have something at the tower, and something in a more centralized location. This trend is continuing: the trend is for all of that to become more centralized.

We need to be pragmatic; we can’t just keep everything on the cloud.

So it’s an engineering optimization problem. And it’s really breaking the base station apart.

CC: So what is a base station? A motherboard, a radio, an antenna, and a network interface card. We are seeing decentralization of these functions.
The edge will not kill the cloud; it will augment the cloud. The true edge is the radio access network, the meeting point between wireless and wireline - an analogy with the airline industry. That connection is material. The base station will allow massive virtualization. The computer sitting at the top of the tower will move down to the base and be virtualized; we will end up in a decentralized world with a metro-area network that is far more geographically localized.

MZ: So, also lots of opportunity for innovation. What other innovations are you seeing?

CC: A lot of innovation is led by telcos. The idea is to make an intelligent connection that can be monetized.
Serverless functions will be married to network functions.
The FCC and net neutrality is the best thing that could happen; this is good for business. A lot of innovation is happening in terms of moving to an intelligent connection.

CS: I see a lot of innovation down to the SoC level.
Machine learning will become a way of doing things, alongside general-purpose processors.

MZ: Let’s go back to what you said about data and how that is driving things. Give us the context: How is data increasing exponentially and what does that mean?

CC: We’ve been promised a cool virtual world; can machines make moral decisions?
On the technological side, you can’t defeat the speed of light. E.g., the human eye can see 150 degrees vertically and 180 horizontally - 5.4 gigs of data a second.

To deliver a truly augmented reality experience, you need a very different type of internet than what we have today. You need a sub-7-millisecond decision.

There are technological boundaries we are trying to overcome; but it means a fundamental re-architecture of what we have today.
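
That sub-7-millisecond budget can be sanity-checked against the speed of light. A minimal back-of-the-envelope sketch (the fiber propagation factor and the round-trip assumption are mine, not from the talk):

```python
# Rough bound on how far away a server can be if an AR decision must fit in
# a 7 ms budget. Assumptions: signals travel over fiber at roughly 2/3 of c,
# and the budget covers a full round trip plus any processing time.

C_KM_PER_MS = 299_792.458 / 1000   # speed of light: ~300 km per millisecond
FIBER_FACTOR = 2 / 3               # typical slowdown from fiber's refractive index

def max_server_distance_km(budget_ms: float, processing_ms: float = 0.0) -> float:
    """One-way distance (km) reachable within the latency budget."""
    travel_ms = budget_ms - processing_ms
    return (travel_ms / 2) * C_KM_PER_MS * FIBER_FACTOR

print(round(max_server_distance_km(7.0)))        # 700 km with zero processing time
print(round(max_server_distance_km(7.0, 5.0)))   # 200 km if 5 ms goes to compute
```

Even with no processing at all, the responding server must sit within a few hundred kilometers, which is the physics behind the push toward edge and metro-area networks.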

MZ: So more decentralization?

CC: You have to be closer to the radio access network. As you move across the city, you move to another tower to get your cell signal back. The amount of data velocity and proximity is far more dynamic than what we experience today.

MZ: How has regulation driven development; and what is on the horizon from a regulation standpoint?

CS: Regulations are separate from market and tech forces, but they are also geography-specific because of different interests. One example in India is the question of user identity. In India, the social security system is in progress, so your mobile number becomes your de facto identity. User identity is geography-specific.

CC: “If you can legally circumvent regulation, do it. It’s hard to follow the rules when the rules move so slowly.” Companies like Facebook are investing in base stations.
In the IoT world, all of these things get terminated in base stations. Some municipalities move faster than others. This is less a federal than a municipal movement.

Or, another way to think about it, “state trumps federal.”


Q: I live in the flatlands of Palo Alto and I can’t get service. How do we ensure service is reliable when moving from cell to cell?

CC: Small-cell 5G innovation will happen soon; it’s one of the de facto standards being built in. What carrier you are on and how they are investing in small-cell tech will affect that.

CS: Also, tracking where the dark areas are.

Q: We’re in a monopoly of operating systems, and the consumer has no choice outside of iOS or Android. Will there be more consumer choice in the future, so consumer metadata is not tied to one of two companies?

CC: Arthur C. Clarke said if what I say seems reasonable to you, then I will have failed; but if what I say sounds unreasonable, then we may have an understanding of what the future will be built for.
We’re not far away from

Humans are terrible at the future value propositions of tech; but certainly people are thinking about how to prevent lock-in.

CS: In India, there’s the idea of a 4G phone where you don’t need everything. In some markets, you don’t need to have everything on your phone all the time. That leads to new ecosystems and new ways of thinking.

Q: Can you move your metadata away from Apple and Google?

CC: Look how long it took Android to catch up to app developer ecosystem. Developers are where gravity happens.


Categories: Technology

Disruptive Cryptography: Post-Quantum & Machine Learning With Encrypted Data

Thu, 14/09/2017 - 19:25

Shay Gueron, Associate Professor of Mathematics, University of Haifa, Israel, and Raluca Ada Popa, Assistant Professor of Computer Science, UC Berkeley

Moderator: John Graham-Cumming, CTO, Cloudflare

Photo by Cloudflare Staff

Raluca is also a Co-Director of the RISELab at UC Berkeley as well as Co-Founder and CTO of a cybersecurity startup called PreVeil. She developed practical systems that protect data confidentiality by computing over encrypted data as well as designed new encryption schemes that underlie these systems.

Shay was previously a Senior Principal Engineer serving as Intel’s Senior Cryptographer and is now a senior principal at AWS; he is an expert in post-quantum cryptography, security, and algorithms.

JGC: Tell us about what you actually do.

RP: Computing on encrypted data is not just theoretical; it’s also exciting because you can keep data encrypted in the cloud while still enabling the functionality of the system. This is exciting because we can cover so many hacking attacks in one shot.

SG: I’ve been working on cryptography: making new algorithms and making it faster. Recently I’ve been thinking about solutions for what will happen when we have a quantum computer strong enough to threaten the known methods of cryptography.

JGC: Why are we worrying ahead of time?

SG: Protocols and implementations have improved; performance on processors allows most things to be encrypted. We are entering a stable situation. But now there is a new threat: there may be quantum computers that can solve the difficult problems. This means we need to start thinking about a replacement for current cryptography.

RP: If someone is saving encrypted communications now, they could decrypt past conversations that could still be relevant in the future.

JGC: We don’t have the quantum computer yet but we already have the programs that will run on it.

SG: All cryptography is based on a belief that something is difficult to do - a “reduction to a difficult problem.” Based on this, there are theoretical works that run “if… then”; but there is no robust proof that factorization is difficult, or that solving a particular problem is hard. We are just not smart enough yet.

JGC: Talk about this concept that there are classes of problems that are hard.

RP: There are classes of problems, and many studies that people have used to boost their confidence about specific algorithms.

JGC: Why can't we just make keys bigger to deal with quantum threat?

SG: We have to be practical in some sense. The amount of traffic that occurs prior to encrypting data is significant. This causes computational burdens.

RP: Shor’s algorithm is particularly effective; it can break RSA. This is not the same for symmetric cryptography, where increasing the key size is more hopeful.
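
The asymmetry comes from the quantum algorithms involved: Shor breaks the factoring and discrete-log problems behind RSA and ECC outright, while Grover only searches a symmetric keyspace quadratically faster. A quick illustration of that rule of thumb:

```python
# Grover's algorithm searches a k-bit keyspace in ~2^(k/2) steps instead of
# 2^k, so a quantum attacker roughly halves the effective security of a
# symmetric key - which is why doubling the key size (AES-128 -> AES-256)
# "is more hopeful" than anything you can do for RSA against Shor.

def grover_effective_bits(key_bits: int) -> int:
    """Effective security of a symmetric key against Grover search."""
    return key_bits // 2

for cipher, bits in [("AES-128", 128), ("AES-256", 256)]:
    print(f"{cipher}: ~{grover_effective_bits(bits)} bits vs a quantum attacker")
# AES-128: ~64 bits vs a quantum attacker
# AES-256: ~128 bits vs a quantum attacker
```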

JGC: So what are we going to fix today?

SG: When you establish communications, first we agree on crypto ciphers; the symmetric key is established based on a key-agreement algorithm and signatures. Signatures are more urgent. For the symmetric-key encryption, we can start today, because the quantum algorithms can’t recover the key.

JGC: Give us an idea of what kinds of things you can do without decrypting something?

RP: In theory, you can compute any function without decrypting. In practice, we can do specialized computations effectively, including machine learning on encrypted data.

For instance: how do you do summation over encrypted data? You get the encryption of the sum; it’s not difficult to do an encrypted summation. There are practical examples - startups, doc sharing in email - and solutions for many classes of computation that apply to products we are using today.
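
The “encryption of the sum” idea is what additively homomorphic schemes provide. A toy sketch using the Paillier cryptosystem (my choice of illustration - the talk names no scheme - with tiny hard-coded primes that are insecure and for demonstration only):

```python
# Toy Paillier encryption: multiplying two ciphertexts yields an encryption
# of the SUM of the plaintexts, so a server can add values it cannot read.
import math
import random

p, q = 293, 433                       # toy primes; real keys use ~1024-bit primes
n = p * q
n2 = n * n
g = n + 1                             # standard choice of generator
lam = math.lcm(p - 1, q - 1)          # Carmichael function lambda(n)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:        # randomness must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# The "server" adds by multiplying ciphertexts - no decryption involved.
c = (encrypt(17) * encrypt(25)) % n2
print(decrypt(c))                     # 42
```

Only the sum’s holder of the private key (p, q) learns the result; the party doing the multiplication sees nothing but ciphertexts.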

SG: But some of those encryption systems also depend on difficulty of factorization.

JGC: How long will it be before companies become “post-quantum certified”?

RP: For certain classes of computation it is happening quickly, but there are still many factors making that difficult. For specialized classes of computations, it should happen in the near future … hopefully within the next 5 years. Why? Because encrypted computation brings new functionality. I.e. sharing encrypted data across hospitals to measure effectiveness of cancer treatments and enable new studies.

A lot of businesses can’t share data - for instance, medical companies - which means they cannot help their patients as effectively; we’ll be able to do many more studies when we enable encrypted computation.

SG: There is a call for proposals by NIST for quantum-resistant algorithms; they estimate this will be a 5-year process. Industry will have to start integrating; the safe way would be to do both: if you want to do a key exchange, you do both the classical and the quantum-resistant one.
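
The “do both” approach amounts to deriving one session key from both shared secrets, so the session stays safe as long as either exchange remains unbroken. A minimal sketch (the HKDF-style derivation and the stand-in secret byte strings are illustrative assumptions, not a specified protocol):

```python
# Hybrid key exchange sketch: combine a classical shared secret (e.g. from
# ECDH) with a post-quantum one (e.g. from a lattice-based KEM) into a
# single session key. An attacker must break BOTH exchanges to recover it.
import hashlib
import hmac

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes,
                       context: bytes = b"hybrid-kx-demo") -> bytes:
    """Derive a 32-byte key from both secrets (HKDF-like extract/expand)."""
    prk = hmac.new(context, classical_secret + pq_secret, hashlib.sha256).digest()
    return hmac.new(prk, b"session-key\x01", hashlib.sha256).digest()

# Both parties hold both shared secrets, so both derive the same key.
k_alice = hybrid_session_key(b"ecdh-shared-secret", b"pq-kem-shared-secret")
k_bob = hybrid_session_key(b"ecdh-shared-secret", b"pq-kem-shared-secret")
assert k_alice == k_bob and len(k_alice) == 32
```

Real deployments (e.g. hybrid TLS experiments) concatenate actual ECDH and KEM outputs inside the handshake’s key schedule; the shape of the combination is the same.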

JGC: How long before we create a quantum computer?

SG: The question is how long it will take before they are strong enough… this will take some time. But there is a lot of motivation.

Quantum computing is not designed to break cryptography, but to do some good. Many industries and governments are trying to build it right now. It’s a race against the human mind.

JGC: One of the arguments against new cryptography is that it is slow. Are there costs?

RP: Certainly. What has sped up encryption is hardware implementations; there are already startups trying to build specialized hardware for advanced encryption.

RP: For the masses to enjoy that acceleration, you would need quantum computers for the masses.

JGC: If there are quantum computers for the masses, what will I get?

SG: You can get better AI, faster searches.

JGC: Tell us about quantum encryption vs. quantum computing: for instance, the Chinese sending data between two satellites.

RP: For widespread use you’d need a lot of quantum computers, but to break encryption you’d just need a few. Widespread adoption of quantum encryption is going to be much slower.

Q&A: What is lattice-based cryptography?

And why do your two domains intersect?

RP: Lattices are much more expressive in terms of the computations they can support, and they are resilient to both quantum attackers and classical algorithms.

SG: We have no idea how to solve lattice problems, even with quantum computers. This is why the new cryptography is trying to build on these problems.

A quantum computer is not simply going to allow you to do a million computations at once.
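The lattice problems the speakers mention underpin schemes based on learning with errors (LWE). A toy, Regev-style encryption of a single bit might look like this; the parameters are far too small to be secure and are for illustration only:

```python
import random

q, n, m = 97, 8, 32                  # modulus, secret dimension, samples
s = [random.randrange(q) for _ in range(n)]          # secret key

def noisy_dot(a):
    e = random.choice([-1, 0, 1])                    # small error term
    return (sum(ai * si for ai, si in zip(a, s)) + e) % q

# Public key: random matrix A and noisy products b = A*s + e (mod q).
# Recovering s from (A, b) is the LWE problem, believed hard even for
# quantum computers.
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
b = [noisy_dot(a) for a in A]

def encrypt(bit):
    rows = random.sample(range(m), m // 2)           # random subset of rows
    u = [sum(A[i][j] for i in rows) % q for j in range(n)]
    v = (sum(b[i] for i in rows) + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    d = (v - sum(uj * sj for uj, sj in zip(u, s))) % q
    # Residue near 0 -> bit 0; residue near q/2 -> bit 1.
    return 1 if q // 4 < d < 3 * q // 4 else 0

assert decrypt(*encrypt(0)) == 0 and decrypt(*encrypt(1)) == 1
```

Decryption works because the accumulated error stays small relative to q/2, so the bit survives the noise; the same noise is what makes the public key useless to an attacker.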

JGC: What would you like the audience to take away from this session?

RP: Mainly, that encrypted computation is practical: there are actually practical solutions, and it can enable new functionalities. Secondly, you can enable interesting studies (medical, financial) with encrypted computation.

SG: People shouldn’t worry about quantum-resistant encryption. We’re working on it.



Q: What advice for people who want to make cheap, future-proof “internet of things” devices?

SG: There is a set of algorithms known to be secure against quantum attacks: hash-based signatures. They are slow, but practical. In general, I’d like to say: don’t lose any sleep over the threat of quantum computers; it will happen gradually, and there is still time to prepare.
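The hash-based signatures SG mentions can be illustrated with a toy Lamport one-time signature, a precursor of practical schemes such as XMSS and SPHINCS. Its security rests only on the hash function, and one key pair must never sign more than a single message:

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random preimages; the public key is their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32))
          for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def msg_bits(msg: bytes):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one preimage per digest bit; the key must not be reused.
    return [sk[i][bit] for i, bit in enumerate(msg_bits(msg))]

def verify(pk, msg, sig):
    return all(H(s) == pk[i][bit]
               for i, (s, bit) in enumerate(zip(sig, msg_bits(msg))))

sk, pk = keygen()
sig = sign(sk, b"hello post-quantum world")
assert verify(pk, b"hello post-quantum world", sig)
```

The scheme is slow and the keys and signatures are large, which matches SG's "slow, but practical" characterization; stateful and stateless constructions build many one-time keys into a single long-lived key.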

RP: I agree; but do start thinking about it. First get Internet of Things right; then worry about the quantum part.

Q: What are some primitives that are missing in programming language that allow you to build easily? How to balance security with programming?

RP: There are some libraries available.

Q: What do you think of the quantum resistant crypto put into the Chrome browser’s TLS stack? Will secrets stand up to a quantum computer?

SG: Google already performed this experiment. They wanted to test the overhead. That particular algorithm was just an exercise to see what would happen in reality if you do both classical and quantum-safe key exchange. Conclusion: yes, we can handle it.

All our sessions will be streamed live! If you can't make it to Summit, here's the link:

Categories: Technology

What Will AI Mean for Everyday Life?

Thu, 14/09/2017 - 19:13

Willie Tejada, Chief Developer Advocate, IBM and Anthony Goldbloom, CEO, Kaggle

Moderator: Jen Taylor, Head of Product, Cloudflare

Photo by Cloudflare Staff

JT: Our focus today is really what does AI mean for everyday life. I’m hearing a lot about AI. What is your assessment about where we are and how it is making a difference?

WT: We’re in an unprecedented, interesting era. From a consumer perspective, AI can carry a negative connotation, but these technologies are going to do a tremendous amount: helping consumers select what they buy, helping patient-centric care.

The combination of data sets and the availability of compute resources is fueling AI.

You might hear that 90% of the world’s data has been created in the past two years. AI will help us deal with that kind of information overload.

The big difference from programmed systems is that AI knows how to understand, reason, learn, and interact.

AG: There is a set of techniques through which we can more accurately predict fraud, price insurance plans, and do credit scoring.

This is a jump in the past 15 years.

Five years ago came the ability to do very exciting things with unstructured data, e.g. automating radiology. Then deep neural networks came along, and we had use case after use case.

AI has lots of programmatic uses.

WT: Algorithms are contributing to oncology.

You take a look at things we’ve done in oncology as an example: the ability to train a system;

The effect is based on the training sets the AI is being fed. Are we using the right humans to train these systems?
We need to hold the systems to the same standards we hold humans to.

Design principle: It’s always assisted. Not replacement.

JT: Do you feel that the future is assisted?

AG: I don’t necessarily agree that the future is assisted. Repetitive tasks can be automated, e.g. radiology; the end result is that algorithms take over anything that involves repetition or mundane tasks.

Humans spend a lot of time doing repetitive things. I think algorithms will be our future radiologists. Any job that demands a lot of repetition, for instance, auditing, mundane legal tasks.

There is probably an element of combination: eventually the algorithm will do the routine, simple tasks, and the more challenging cases will be given to humans.

WT: I agree. “We’re getting humans to raise their game.” The idea is to get rid of commodity tasks: when I call Comcast, I go through the same questions all over again. How do you reduce the time from a day and a half to five minutes? How do we get humans to find solutions to more complex problems?

Even in a scenario where AI is playing a game, the AI takes care of the commodity moves and the final winning move comes from the human. Creativity comes from humans; the commodity work from machines.

JT: The great leap forward is processing unstructured data and giving insights, even to someone who may only see a small sliver of it.

WT: Especially important in life sciences, where so much of the data is unstructured: handwritten reports, etc. There are 10k new articles on clinical trials.

AG: Let’s say radiologists can look at 3,000 images a year; AI can look at 3,000 a second.
As long as the task is suitable, AI can achieve the objective.

Machines have an unfair advantage.

JT: what should we be doing as a community to realize potential benefits of AI in everyday life?

AG: I’m at Google and I think Google does this effectively: when we use the voice assistant, or when we search our photos by name and it finds pictures of us, that’s the Google Brain team going out and infusing products with machine learning.

There is a shift companies should make: be willing to need a handful of outstanding machine learners rather than many, and think the right way about talent: small, extremely talented teams.

WT: The developer is this era’s doctor/engineer.

The data team and the application team used to work separately; now they work collaboratively, and there’s more need for data scientists. Data is the fuel for AI, so that’s an important dynamic to think about.

Those roles didn’t exist until recently. As we go into next phase, new roles and division of labor will come up.

Building a team with tremendous expertise is necessary.

JT: Looking forward, how far should we and can we be taking AI?

AG: “The future is already here; it’s just not evenly distributed.”
Challenges are mainly organizational.
Google Brain is an example of how you productionize machine learning in a company.

Also, reinforcement learning: there are techniques now that automate trial and error; the system plays enough games that it learns what gets it a good score.

We’re starting to see AI as an input to stock trading, ad targeting, etc.
Generative models are a new area of machine learning: they’ll take an image and write a caption for it, so visually impaired people can have scenes described to them by AI.

Deep neural networks are starting to make their way into existing use cases.

WT: There is no reason to believe that something like the tax code can’t be handled by AI. In the future, systems will have embedded AI, and the Internet can provide access to these systems at a commodity level.

JT: You’ve talked about replacing commodity activities more broadly. I think about the development of trust it will take for these technologies to become widespread; I’m skeptical. How can we develop trust? How should we think about building trust for broader adoption?

AG: There’s the issue of the market being ready. Do I trust a cancer diagnosis from a machine? Building trust is case-specific.

Let’s use radiology: you can start by having a machine operate alongside a human and look at the agreement rates. With medical diagnoses you eventually know for sure, so over time articles are published and the machine builds a track record. If it’s high-performing, maybe it takes over. There is no general answer; it’s very use-case specific.

WT: Agree. I think you have to build these on principles, i.e. transparency. As a consumer you want to know if you are dealing with a human or a system; if a system, who taught it? And when it generates a recommendation, you want to know which data set that recommendation was generated from. Human-assisted is important; it will yield the type of system people can trust.

JT: Also, no human is perfect; so what are our expectations of the system vs. the human?

AG: “And no algorithm is perfect.” The Tesla will have an accident, just as humans do. The question is: Does the Tesla have accidents at a lower rate than humans? You can be sure that when a non-human has an accident, we’ll view it differently.


Q: I’m a physician at Stanford. Doctors spend only 30% of our time taking care of patients, 70% on data. Are we developing AI at the expense of human intelligence?

WT: Not at the expense. The question is how AI gives you more time, makes you more efficient, and gives you the data to make better decisions. Data entry will be taken over: you won’t have to key in data; it will be learned, read, or listened to by a machine, giving you more leverage over your data set.

AG: Future lives will be more interesting when you take mundane, repetitive tasks away.

Let’s say our mundane roles go away; does that mean fewer of us are needed? Historically we’ve gone through waves of automation and more professions were created. It’s hard to know in this case: is the disruption happening too quickly for us to adapt? It’s a little scary. But if the structure does change, I think all our lives will be more interesting minus the mundane tasks.

Q: I have a 20-year old daughter in college. With so many jobs potentially being replaced, what career advice do you have for her?

AG: Computer programming and machine learning are good bets.
If the job involves creativity and connecting dots in disparate ways, I don’t know of any machine learning technique that can replace that even remotely.

WT: In the major revolutions, there’s always been the fear that occupations will be replaced.
We’re in the same era.

Q: Use-cases have been scientific and medical; what about social and political limitations of AI implementations? E.g. Law that involves rules that are like algorithms: would you be willing to replace jury trials with AI, why or why not?

AG: At the low levels of the legal system, yes; it’s only once you get to the Supreme Court that cases should still be conducted by humans. There are so many rote cases that come to court again and again that it seems feasible: rote cases could very well be handled by AI.

WT: Reasoning is still important. You may have data sets that assist the jury to help them make a better decision, but you can’t replace the human factor in the judgment call.

All our sessions will be streamed live! If you can't make it to Summit, here's the link:

Categories: Technology


Additional Terms