Blogroll

I read blogs, as well as write one. The 'blogroll' on this site reproduces some posts from some of the people I enjoy reading.

Disclaimer: Reproducing an article here does not necessarily imply agreement or endorsement!

So government isn't very good at running research programs then?

Adam Smith Institute - 4 hours 30 min ago

The standard argument in favour of government running research programs is that the product, knowledge, is a public good. That is, it's non-rivalrous and non-excludable and thus the private sector will underproduce it, simply on the grounds that non-rivalrous and non-excludable goods are difficult to profit from, so a profit-seeking private sector won't do very much of that activity. Thus government should step in to produce the socially optimal amount of whatever it is.

There are most certainly areas where we agree with the argument. It is exactly the logic which produces the patent and copyright system, for example. However, the wider logic of government intervention in the provision of public goods is not the same as concluding that government must provide that item. We think, for example, of the herd immunity provided by a vaccination campaign. The US does it largely by insisting that children cannot enter public education without having been vaccinated - the UK by the NHS directly providing the vaccinations. We think that second system works a little better. But that is not necessarily true of all public goods.

Which brings us to biomedical research. The Zuckerbergs are funding $3 billion of such. This is welcomed, as the field currently rather suffers:

Success rates for NIH grant applications are at the lowest they’ve been going as far back as the 1970s. When the money for science is this tight, researchers don’t take big risks. Instead of making innovative leaps in science, researchers early in their careers are typically among the most risk averse, taking on bits of studies designed by their senior mentors. Writing a successful grant application often requires preliminary data – in other words, you need to have already done a chunk of the research you’re proposing to do. Even then, about 20-25% of academic biomedical researchers’ time (in my experience) is spent applying for grants to support their projects. Much of their mental effort goes into grantsmanship, which is not at all the same thing as creativity.

Academic researchers are promoted on the basis of “achievement” – grants won and papers published. Volume is what matters here, not necessarily impact. According to Adam Grant, a professor of organizational psychology at the Wharton School of Business, “The greatest originals are the ones who fail the most, because they’re the ones who try the most.” But biomedical researchers can’t afford to have failed experiments because they’re not publishable. Furthermore, they need to take as much of the credit as possible for that “productivity” to count towards their advancement, so there’s an incentive against working with too many other people. Biomedical research is highly siloed in parallel with the grants funding it. An added challenge is that the gold standard for medical research – the randomized clinical trial, ideally conducted in multiple sites and settings – is very expensive.

The NIH spends 10 times as much per year as that entire Zuckerberg gift. And yet we're told that government does this job of funding research rather badly.

Or, as we might put it more widely: that we've identified a possible market failure does not mean that government is the solution - for there is such a thing as government failure too.

Categories: Current Affairs

How we brought HTTPS Everywhere to the cloud (part 1)

CloudFlare - Sat, 24/09/2016 - 16:46

CloudFlare's mission is to make HTTPS accessible for all our customers. It provides security for their websites, improved ranking on search engines, better performance with HTTP/2, and access to browser features such as geolocation that are being deprecated for plaintext HTTP. With Universal SSL or similar features, a simple button click can now enable encryption for a website.

Unfortunately, as described in a previous blog post, this is only half of the problem. To make sure that a page is secure and can't be controlled or eavesdropped on by third parties, browsers must ensure that not only the page itself but also all its dependencies are loaded via secure channels. Page elements that don't fulfill this requirement are called mixed content and can result in either the entire page being reported as insecure or even being blocked completely, thus breaking the page for the end user.

What can we do about it?

When we conceived the Automatic HTTPS Rewrites project, we aimed to automatically reduce the amount of mixed content on customers' web pages without breaking their websites and without any delay noticeable to end users receiving a page that is being rewritten on the fly.

A naive way to do this would be to just rewrite http:// links to https://, or to let browsers do that with the Upgrade-Insecure-Requests directive.

Unfortunately, such an approach is very fragile and unsafe unless you're sure that:

  1. Every HTTP sub-resource is also available via HTTPS.
  2. It's available at the exact same domain and path after the protocol upgrade (that's not the case more often than you might think).

If either of these conditions is unmet, you end up rewriting resources to non-existent URLs and breaking important page dependencies.

Thus we decided to take a look at the existing solutions.

How are these problems solved already?

Many security-aware people use the HTTPS Everywhere browser extension to avoid those kinds of issues. HTTPS Everywhere ships with a well-maintained database from the Electronic Frontier Foundation containing all sorts of mappings for popular websites, which safely rewrite HTTP versions of resources to HTTPS only when it can be done without breaking the page.

However, most users are either not aware of it or not able to use it at all - for example, on mobile browsers.


So we decided to flip the model around. Instead of rewriting URLs in the browser, we would rewrite them inside the CloudFlare reverse proxy. By taking advantage of the existing database on the server side, website owners could turn it on and all their users would instantly benefit from HTTPS rewriting. The fact that it's automatic is especially useful for websites with user-generated content, where it's not trivial to find and fix all the cases of inserted insecure third-party content.

At our scale, we obviously couldn't use the existing JavaScript rewriter. The performance challenges for a browser extension which can find, match and cache rules lazily as a user opens websites, are very different from those of a CDN server that handles millions of requests per second. We usually don't get a chance to rewrite them before they hit the cache either, as many pages are dynamically generated on the origin server and go straight through us to the client.

That means, to take advantage of the database, we needed to learn how the existing implementation works and create our own in the form of a native library that could work without delays under our load. Let's do the same here.

How does HTTPS Everywhere know what to rewrite?

HTTPS Everywhere rulesets can be found in the src/chrome/content/rules folder of the official repository. They are organized as XML files, each covering its own set of hosts (with a few exceptions). This allows users with basic technical skills to write and contribute missing rules to the database on their own.

Each ruleset is an XML file of the following structure:

<ruleset name="example.org">
    <!-- Target domains -->
    <target host="*.example.org" />

    <!-- Exclusions -->
    <exclusion pattern="^http://example\.org/i-am-http-only" />

    <!-- Rewrite rules -->
    <rule from="^http://(www\.)?example\.org/"
          to="https://$1example.org/" />
</ruleset>

At the time of writing, the HTTPS Everywhere database consists of ~22K such rulesets covering ~113K domain wildcards, with ~32K rewrite rules and exclusions.

For performance reasons, we can't keep all those ruleset XMLs in memory, walk their nodes, check each wildcard, perform replacements based on a specific string format and so on. All that work would introduce significant delays in page processing and increase memory consumption on our servers. That's why we had to perform some compile-time tricks for each type of node to ensure that rewriting is smooth and fast for any user from the very first request.

Let's walk through those nodes and see what can be done in each specific case.

Target domains

First of all, we look at the target elements, which describe the domain wildcards that the current ruleset potentially covers.

<target host="*.example.org" />

If a wildcard is used, it can be either left-side or right-side.

A left-side wildcard like *.example.org covers any hostname which has example.org as a suffix - no matter how many subdomain levels you have.

A right-side wildcard like example.* covers only one level instead, so that hosts which merely begin the same way but end with an unexpected domain are not accidentally caught. For example, the Google ruleset, among others, uses the google.* wildcard, which should match google.com, google.ru, google.es etc., but not google.mywebsite.com.

Note that a single host can be covered by several different rulesets, as wildcards can overlap, so the rewriter has to be given the entire database in order to find a correct replacement. Still, matching the hostname instantly reduces all ~22,000 rulesets to just 3-5, which we can deal with more easily.
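
To make those two flavours concrete, here is a minimal sketch of the matching semantics in C (illustrative only, not CloudFlare's code; the helper names and the leading-dot suffix convention are assumptions):

#include <stdbool.h>
#include <string.h>

/* "*.example.org": any host that ends with ".example.org",
   no matter how many subdomain levels precede it. */
static bool matches_left_wildcard(const char *host, const char *dot_suffix)
{
    size_t hlen = strlen(host), slen = strlen(dot_suffix);
    return hlen > slen && strcmp(host + hlen - slen, dot_suffix) == 0;
}

/* "google.": the prefix plus exactly one more label, so google.com and
   google.es match, but google.mywebsite.com does not. */
static bool matches_right_wildcard(const char *host, const char *prefix)
{
    size_t plen = strlen(prefix);
    if (strncmp(host, prefix, plen) != 0)
        return false;
    return host[plen] != '\0' && strchr(host + plen, '.') == NULL;
}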

Matching wildcards at runtime one-by-one is, of course, possible, but very inefficient with ~113K domain wildcards (and, as we noted above, one domain can match several rulesets, so we can't even bail out early). We need to find a better way.


We use Ragel to build fast lexers in other pieces of our code. Ragel is a state machine compiler which takes grammars and actions described in its own syntax and generates source code in a given programming language as output. We decided to use it here too and wrote a script that generates a Ragel grammar from our set of wildcards. In turn, Ragel converts it into the C code of a state machine capable of going through the characters of a URL, matching hosts and invoking a custom handler for each matching ruleset.

This leads us to another interesting problem. At the time of writing, among the ~113K domain wildcards we have ~4.7K with a left wildcard and fewer than 200 with a right wildcard. Left wildcards are expensive in state machines (including regular expressions) as they cause DFA space explosion during compilation, so Ragel got stuck for more than 10 minutes without giving any result - trying to analyze all the *. prefixes and merge all the possible states they can lead to, resulting in a complex tree.

Instead, if we choose to look from the end of the host, we can significantly simplify the state tree (as only 200 wildcards need to be checked separately now instead of 4.7K), thus reducing compile time to less than 20 seconds.

Let's take an oversimplified example to understand the difference. Say we have the following target wildcards (3 left wildcards against 1 right wildcard and 1 simple host):

<target host="*.google.com" />
<target host="*.google.co.uk" />
<target host="*.google.es" />
<target host="google.*" />
<target host="google.com.ua" />

If we build a Ragel state machine directly from those:

%%{
    machine hosts;

    host_part = (alnum | [_\-])+;

    main := (
        any+ '.google.com'   |
        any+ '.google.co.uk' |
        any+ '.google.es'    |
        'google.' host_part  |
        'google.com.ua'
    );
}%%

We will get the following state graph:

You can see that the graph is already pretty complex: each starting character, even g, which is the explicit first character of the 'google.' and 'google.com.ua' strings, still needs to simultaneously feed the any+ matches. Even when you have already parsed the google. part of the host name, the input can still correctly match any of the given wildcards, whether as google.google.com, google.google.co.uk, google.google.es, google.tech or google.com.ua. This already blows up the complexity of the state machine, and we only took an oversimplified example with three left wildcards here.

However, if we simply reverse each rule in order to feed the string starting from the end:

%%{
    machine hosts;

    host_part = (alnum | [_\-])+;

    main := (
        'moc.elgoog.'       |
        'ku.oc.elgoog.'     |
        'se.elgoog.'        |
        host_part '.elgoog' |
        'au.moc.elgoog'
    );
}%%

we get a much simpler graph and, consequently, significantly reduced graph build and matching times:

So now, all we need to do is go through the host part of the URL, stop on the / right after it, and run the machine backwards from that point. There is no need to waste time on in-memory string reversal, as Ragel provides the getkey instruction for custom data access expressions, which we can use to access characters in reverse order once we have matched the ending slash.
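
As a rough illustration of the idea (a sketch only, not the Ragel-generated code; the matcher_* functions are hypothetical stand-ins for the generated state machine), the driver just has to locate the end of the host and hand characters over last-to-first:

#include <string.h>

/* Hypothetical per-character interface to the generated state machine. */
void matcher_reset(void);
void matcher_feed(char c);          /* feed one character                 */
int  matcher_finish(void);          /* number of candidate rulesets found */

/* Locate the host part of an absolute http:// URL and feed its characters
   to the matcher in reverse order, without building a reversed copy. */
static int match_host_backwards(const char *url)
{
    const char *host = strstr(url, "://");
    if (!host)
        return 0;
    host += 3;                                /* skip "://"                  */

    const char *end = strchr(host, '/');      /* stop on the '/' right after */
    size_t len = end ? (size_t)(end - host) : strlen(host);

    matcher_reset();
    for (size_t i = len; i > 0; i--)          /* last character first        */
        matcher_feed(host[i - 1]);
    return matcher_finish();
}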

Here is an animation of the full process:

After we've matched the host name and found potentially applicable rulesets, we need to ensure that we're not rewriting URLs which are not available via HTTPS.

Exclusions

Exclusion elements serve exactly this goal.

<exclusion pattern="^http://(www\.)?google\.com/analytics/" />
<exclusion pattern="^http://(www\.)?google\.com/imgres/" />

The rewriter needs to test against all the exclusion patterns before applying any actual rules. Otherwise, paths that have issues or can't be served over HTTPS will be incorrectly rewritten and will potentially break the website.

We don't care about matched groups, nor even which particular regular expression was matched, so as an extra optimization, instead of going through them one by one, we merge all the exclusion patterns in the ruleset into one regular expression that can be internally optimized by the regexp engine.

For example, for the exclusions above we can create the following regular expression, common parts of which can be merged internally by a regexp engine:

(^http://(www\.)?google\.com/analytics/)|(^http://(www\.)?google\.com/imgres/)

After that, in our action we just need to call pcre_exec without a match data destination - we don't care about the matched groups, only about the completion status. If a URL matches the regular expression, we bail out of this action, as the following rewrites shouldn't be applied. Ragel will then automatically try the next matched action (the next ruleset) on its own until an applicable one is found.
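
As a sketch of that check (illustrative only; the merged pattern and sample URL come from the example above, and error handling is minimal), the merged exclusion regex is compiled once and then probed with pcre_exec without asking for any capture data:

#include <stdio.h>
#include <string.h>
#include <pcre.h>

int main(void)
{
    /* The two exclusion patterns merged into a single regular expression. */
    const char *merged =
        "(^http://(www\\.)?google\\.com/analytics/)"
        "|(^http://(www\\.)?google\\.com/imgres/)";

    const char *error;
    int erroffset;
    pcre *re = pcre_compile(merged, 0, &error, &erroffset, NULL);
    if (!re) {
        fprintf(stderr, "compile failed at %d: %s\n", erroffset, error);
        return 1;
    }

    const char *url = "http://www.google.com/imgres/something";

    /* No ovector: we only care whether the URL is excluded, not where or
       which group matched. A non-negative return code means "matched". */
    int rc = pcre_exec(re, NULL, url, (int)strlen(url), 0, 0, NULL, 0);
    puts(rc >= 0 ? "excluded - leave this URL alone"
                 : "not excluded - try the rewrite rules");

    pcre_free(re);
    return 0;
}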

Finally, once we have both matched the host name and ensured that our URL is not covered by any exclusion pattern, we can move on to the actual rewrite rules.

Rewrite rules

These rules are presented as JavaScript regular expressions and replacement patterns. The rewriter matches the URL against each of those regular expressions as soon as a host matches and a URL is not an exclusion.

<rule from="^http://(\w{2})\.wikipedia\.org/wiki/" to="https://secure.wikimedia.org/wikipedia/$1/wiki/" />

As soon as a match is found, the replacement is performed and the search can be stopped. Note: while exclusions cover dangerous replacements, it's totally possible and valid for the URL to not match any of the actual rules - in that case it should just be left intact.

After the previous steps we are usually left with only a couple of rules, so unlike in the case of exclusions, we don't apply any clever merging techniques to them. It turned out to be easier to go through them one by one than to create a regexp engine specifically optimized for multi-regexp replacements.

However, we don't want to waste time on regexp analysis and compilation on our edge servers. That would require extra time during initialization and memory for carrying around unnecessary textual sources of regular expressions. PCRE allows regular expressions to be precompiled into its own format using pcre_compile. We then gather all these compiled regular expressions into one binary file and link it in using ld --format=binary - a neat option that tells the linker to attach any given binary file as a named data resource available to the application.


The second part of the rule is the replacement pattern, which uses the simplest feature of JavaScript regex replacement - numbered groups. It has a form like https://www.google.com.$1/, which means that the resulting string should be the concatenation of "https://www.google.com.", the matched group at position 1, and "/".

Once again, we don't want to waste time repeatedly looking for dollar signs and converting string indexes to numbers at runtime. Instead, it's more efficient to split the pattern at compile time into the static substrings { "https://www.google.com.", "/" } plus an array of group indexes which need to be inserted in between - in our case just { 1 }. Then, at runtime, we simply build the string by going through both arrays one by one and concatenating the static strings with the found matches.
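
A minimal sketch of that runtime concatenation (illustrative C only; the pattern is assumed to have been split at build time, and match_group is a hypothetical stand-in for reading captures out of the PCRE ovector):

#include <stdio.h>
#include <string.h>

/* "https://www.google.com.$1/" pre-split at build time: N+1 literal
   pieces interleaved with N captured-group indexes. */
static const char *literals[] = { "https://www.google.com.", "/" };
static const int   groups[]   = { 1 };
static const int   ngroups    = 1;

/* Hypothetical accessor for a captured group of the matched "from" regex. */
static const char *match_group(int n)
{
    return n == 1 ? "ua" : "";
}

int main(void)
{
    char out[512] = "";

    /* Walk both arrays in lockstep: literal, group, literal, group, ... */
    for (int i = 0; i < ngroups; i++) {
        strcat(out, literals[i]);
        strcat(out, match_group(groups[i]));
    }
    strcat(out, literals[ngroups]);   /* trailing literal piece */

    printf("%s\n", out);              /* prints https://www.google.com.ua/ */
    return 0;
}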

Finally, after such a string is built, it's inserted in place of the previous attribute value and sent to the client.

Wait, but what about testing?

Glad you asked.

The HTTPS Everywhere extension uses an automated checker that verifies the validity of rewritten URLs on any change to a ruleset. To make that possible, rulesets are required to contain special test elements covering all the rewrite rules.

<test url="http://maps.google.com/" />

What we do on our side is collect those test URLs, combine them with our own tests auto-generated from the wildcards, and run both the HTTPS Everywhere built-in JavaScript rewriter and our own implementation side by side to ensure that we get the same results: URLs that should be left intact are left intact by our implementation, and URLs that are rewritten are rewritten identically.
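
A sketch of what that comparison boils down to (the two entry points are hypothetical stand-ins for the real implementations):

#include <assert.h>
#include <string.h>

/* Each takes a URL and returns the rewritten URL, or the input unchanged
   if no rule applies. */
const char *js_reference_rewrite(const char *url);  /* HTTPS Everywhere */
const char *native_rewrite(const char *url);        /* our library      */

/* For every <test url="..."> collected from the rulesets, both
   implementations must agree exactly - whether the URL gets rewritten
   or is left intact. */
static void check_one(const char *test_url)
{
    assert(strcmp(js_reference_rewrite(test_url),
                  native_rewrite(test_url)) == 0);
}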

Can we fix even more mixed content?

After all this was done and tested, we decided to look around for other potential sources of guaranteed rewrites to extend our database.

One such source is the HSTS preload list, maintained by Google and used by all the major browsers. This list allows website owners who want to ensure that their website is never loaded via http:// to submit their hosts (optionally together with subdomains), and in this way opt in to having any http:// references auto-rewritten to https:// by a modern browser before the request even hits the origin.

This means the origin guarantees that the HTTPS version will always be available and will serve the same content as HTTP - otherwise any resources referenced from it will simply break, as the browser won't attempt to fall back to HTTP once the domain is in the list. A perfect match for another ruleset!

As we already have a working solution, and this list involves no complexities around regular expressions, we can download the JSON version of it directly from the Chromium source and, as part of the build process, convert it into the same XML ruleset format with wildcards and exclusions that our system already understands and handles.
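
For example, a preload entry submitted with subdomains included might be converted into a generated ruleset along these lines (a hypothetical illustration; the real generator's naming and output may differ):

<ruleset name="example.com (HSTS preload)">
    <target host="example.com" />
    <target host="*.example.com" />
    <rule from="^http:" to="https:" />
</ruleset>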

This way, both databases are merged and work together, rewriting even more URLs on customer websites without any major changes to the code.

That was quite a trip

It was... but it's not really the end of the story. You see, in order to provide safe and fast rewrites for everyone, and after analyzing the alternatives, we decided to write a new streaming HTML5 parser that became the core of this feature. We intend to use it for even more tasks in the future, to ensure that we can improve the security and performance of our customers' websites in even more ways.

However, it deserves a separate blog post, so stay tuned.

And remember - if you're into web performance, security or just excited about the possibility of working on features that do not break millions of pages every second - we're hiring!

P.S. We are incredibly grateful to the folks at the EFF who created the HTTPS Everywhere extension and worked with us on this project.

Categories: Technology

Aeneas, Anarchy, and America #3

Blog & Mablog - Sat, 24/09/2016 - 14:05
Introduction:

As we read Scripture carefully, we should note that there are many differences between the status of the Jews in Isaiah’s day, for example, and our condition. At the same time, we have to realize that God gave the Scriptures to us for an example, so that we would be able to learn from their failures. We see this in multiple places (e.g. 1 Cor. 10:6, 11; Jude 7; Rom. 15:4). And this means that there are strong elements of continuity, and not just discontinuity. If there is no continuity, there are no lessons.

The Text:

“Woe unto them that seek deep to hide their counsel from the Lord, And their works are in the dark, And they say, Who seeth us? and who knoweth us?” (Is. 29:15).

Summary of the Text:

There are different kinds of blindness. There is natural blindness—a rock is blind, for example. Rocks can’t see at all. There is unnatural physical blindness—as a man may be blind through no fault of his own. He belongs to a race of seeing creatures, but he cannot see. And then there is the peculiar kind of blindness that believes that the God of all omniscience is blind. This kind of blindness is the result of a judicial stupor—when God strikes a people for their rank hypocrisy.

In just such a context, Israel was blind because they had blinded themselves (v. 9). They were drunk, but not with wine. The Lord had poured a spirit of stupor over their heads (v. 10). What was happening was to them a sealed book, or an unsealed book in the hands of an illiterate (vv. 11-12). The cause of all this was their formalism and hypocrisy (v. 13). God was therefore going to do something amazing (v.14). Then we have our text—woe to those who want to outsmart God (v. 15). Surely, Isaiah says, you have everything inverted—clay does not shape the potter (v. 16). Clay does not have the right, or the power, to do any such thing. Clay that attempts to aspire to the role of potter can only achieve the status of being messed-up clay.

The Father of Lies:

The issue is lies, always lies (John 8:44). In political life, the foundational issue is honesty. What do I mean? If someone were to maintain that God did not know the location of a particular river in Montana, and someone were to contradict him, their resultant debate would not be a debate over geography. We have to recognize that when two armies meet in a particular place, fighting over the control of a continent, the actual turf where they are fighting need not be that important—whether it be Waterloo, or Gettysburg. When the serpent lied to Eve, the death was in the forbidden fruit, but the poison was in the words “hath God said?”

“But I fear, lest by any means, as the serpent beguiled Eve through his subtilty, so your minds should be corrupted from the simplicity that is in Christ.” (2 Cor. 11:3).

So for example, if someone were to tell you that Jesus never went to Capernaum, the issue is not how important it was in the abstract for Him to ever visit that place. The issue is what God has told us—whether through conscience, nature, right reason, or Scripture. And the central, foundational warning must be this matter of simple intellectual honesty. As Emerson once put it, “The louder he talked of his honor, the faster we counted our spoons.” When dealing with liars, you must always define your terms. Defining terms is how you count the spoons.

Common Idols

All of our current woes are a function of idolatry. Either we are living under the blessing of the true and living God, or we are living under the faux-blessings of the false gods, faux-blessings that will always reveal their anemic nature at some point. Consider some of the following:

What’s Wrong With Human Rights?

Human rights are given by the god of the system. If the God is the true God, then the rights are true rights. If the gods are false, then the gifts they give (including “rights”) will be false gifts. Moreover, they are false because they will reflect the nature of the giver. In a secular society, where the god is Demos, the people, the gifts will reflect the nature of the giver—and so they will be both sinful and mutable.

For instance, if you have a right to affordable housing, this means that someone else has an obligation to provide you with it. This is the kind of thumb-on-the-scale-cheating that idols do all the time. But when you have the right to speak your mind, no one else need do anything. So always remember that false gods offer a false gospel.

Pseudo-History:

It is a matter of great importance whether Moses or Jeroboam writes the history books. We might be able to agree on the phrase “this is the God who brought you out of the land of Egypt.” The disagreement comes when we examine the referent of “this.” What God are you pointing to?

You have been told, ad nauseam, that the United States was founded as a secular republic, breaking with the older order of Christendom. Secularism, formal religious neutrality, saved us all from endless religious strife and blood-letting. But this is almost entirely false. False gods write false salvation narratives.

American Exceptionalism:

One of the more common idols on the right is the notion of American exceptionalism. False gods offer a false doctrine of election. But look where this hubris has gotten us.

 

What Cultural Engagement Actually Is:

Some Christians run away from culture. This is the separatist move. Others approach culture, hoping for some kind of amalgamation or compromise. This is the syncretistic move. Others go over to the secular culture in order to surrender to it. This is the “convert me” move. The only appropriate option for us as Christians is to recognize the ultimate authority of Christ, and to disciple all the nations, including this one, baptizing and teaching obedience.

Has It Come to That?

We have to choose from one of the two main candidates, it is said. But why? If someone says that we have to vote for Trump because Hillary is far worse, then wouldn’t that require voting for Hillary at some point if she were running against someone far worse?

What do Christians do when there are no elections where they live? Well, they have to trust God. But we don’t want to have to do that. Trust God? Has it come to that?

The post Aeneas, Anarchy, and America #3 appeared first on Blog & Mablog.

Categories: People I don't know

Humility Is Not Lying

Blog & Mablog - Sat, 24/09/2016 - 11:17

Scripture reminds us that we are to think of others in a way that is very difficult to do. “Let nothing be done through strife or vainglory; but in lowliness of mind let each esteem other better than themselves” (Phil. 2:3).

Now the sense of this is not that we are to pretend that others are better than we are in things that we can do and they can’t. If you can play the piano and they cannot, if you know how to do differential equations and they can’t, if you know how to dunk a basketball and they can’t, considering someone else better than yourself does not mean telling yourself lies about it. Nor does it mean telling them lies about it.

Rather, the position of the other is more important to you than your own. You are not competing with them. You are not striving. Remember the first part of the verse. Do nothing from strife or vainglory. If you have a relationship with someone, and in your mind you have a running tally of points, and it is important to you that your number is always higher than theirs, then you have a problem.

So you play the piano better. Fine. Their playing is more important to you than yours is. You run a business better. Fine. Their business matters more to you than your own does. Sometimes competition is unavoidable, just in the nature of the case. They set up shop after you did, and their establishment is right across the street. That is the way it goes sometimes. But competition in the ego is never unavoidable. That is what must be laid down. That is what we must consider as cross fodder.

When we strive, we are trying to pick things up with our right hand, instead of receiving them with our left. Are you in any kind of adversarial set up with anyone else in the body, at any level? Then there is a sense in which you must let it go.

The post Humility Is Not Lying appeared first on Blog & Mablog.

Categories: People I don't know

Questions in the Guardian we can answer

Adam Smith Institute - Sat, 24/09/2016 - 09:01

The Guardian asks us:

Do we really want post-Brexit Britain to be the world’s biggest tax haven?

Yes.

Next question?

Or in more detail, yes we do want tax competition. For it is that very competition, as it is in so many other areas of life, which limits the amount that we the people can get shafted. 

We all know very well that a monopoly supplier of beer would be watering that of the workers even as they raised the price. We prosecute people who build cartels for the very same reason - such cooperation between producers means that it is the consumer that is going to get screwed.

Tax competition is exactly the same logic. It's entirely true that there does need to be government - no, we are not anarcho-capitalists around here - and that means there must be tax revenue to pay for it. It is also true that a government is going to be sovereign over its own territory. Which means that the only form of competition we can have here, to protect us against that monopolist problem, is between tax jurisdictions rather than within them.

And thus the joy with which we welcome tax competition and yes, even tax havens. Simply because their existence limits the depredations the governors may make upon the pockets of the populace.

And why shouldn't it be us that leads the world in such matters? We did, after all, rather pioneer these very ideas, our own Adam Smith leading the way in much of it, of course, starting with the point that it is economic freedom which leads to the enrichment of said populace, competition being the thing which ensures that economic freedom.

We insist that the bakers and the butchers compete for our custom. Why should that not be true of those who would claim to rule us, those who claim to know how our money should be spent? We might even find that leaving it to fructify in the pockets of the populace provides that optimal solution.

Which is exactly why those who would rule us don't desire the system of competition - and thus exactly why we must have it.

Categories: Current Affairs

And Soon to be Pope

Blog & Mablog - Sat, 24/09/2016 - 04:44

“For we are not just doing battle with the powers of darkness; we are also engaged in mortal conflict with the theology of Madeline Bassett, resident theologian and high priestess of pop evangelicalism” (Writers to Read, p. 57).

The post And Soon to be Pope appeared first on Blog & Mablog.

Categories: People I don't know

Archbishop of Canterbury writes in defense of church schools

Anglican Ink - Sat, 24/09/2016 - 01:16

The CofE has educated millions of Britons alive today, and it has ambitious plans to extend its reach, Archbishop Justin Welby writes in the Times Educational Supplement.

Woman bishop addresses Forward in Faith Wales on living well with diversity

Anglican Ink - Sat, 24/09/2016 - 01:10

At a conference organized by Credo Cymru, the body representing traditionalist beliefs in the Church in Wales, the Bishop of Gloucester, the Rt Revd Rachel Treweek, spoke about living well with diversity.

Islamic extremism infiltrates the police

Christian Concern - Fri, 23/09/2016 - 18:00
In this piece, Christian Concern's Director of Islamic Affairs, Tim Dieppe, discusses news that Islamic extremism has infiltrated the police force.

"The idea that our counterterrorism division would employ someone sympathetic to the Taliban, and let them continue in that role is shocking and disturbing," he writes.

He states that the "foundation of our society is at stake" and urges believers to pray that police officers would not be intimidated by fears of Islamophobia.
 


Anglican and Catholic bishops on pilgrimage to Canterbury and Rome

Anglican Ink - Fri, 23/09/2016 - 17:07

Thirty-six IARCCUM Anglican and Catholic bishops, representing 19 different regions where Anglicans and Catholics live side by side in significant number, will meet in Canterbury and Rome for a summit meeting in October of this year. 

An overview of TLS 1.3 and Q&A

CloudFlare - Fri, 23/09/2016 - 17:01

The CloudFlare London office hosts weekly internal Tech Talks (with free lunch picked by the speaker). My recent one was an explanation of the latest version of TLS, 1.3, how it works and why it's faster and safer.

You can watch the complete talk below or just read my summarized transcript.

The Q&A session is open! Send us your questions about TLS 1.3 at tls13@cloudflare.com or leave them in the Disqus comments below and I'll answer them in an upcoming blog post.


Summarized transcript

TLS 1.2 ECDHE

To understand why TLS 1.3 is awesome, we need to take a step back and look at how TLS 1.2 works. In particular we will look at modern TLS 1.2, the kind that a recent browser would use when connecting to the CloudFlare edge.

TLS 1.2 ECDHE exchange

The client starts by sending a message called the ClientHello that essentially says "hey, I want to speak TLS 1.2, with one of these cipher suites".

The server receives that and answers with a ServerHello that says "sure, let's speak TLS 1.2, and I pick this cipher suite".

Along with that, the server sends its key share. The specifics of this key share change based on which cipher suite was selected. When using ECDHE, key shares are mixed with the Elliptic Curve Diffie-Hellman algorithm.

The important part to understand is that for the client and server to agree on a cryptographic key, they need to receive each other's portion, or share.

Finally, the server sends the website certificate (signed by the CA) and a signature on portions of ClientHello and ServerHello, including the key share, so that the client knows that those are authentic.

The client receives all that, and then generates its own key share, mixes it with the server key share, and thus generates the encryption keys for the session.

Finally, the client sends the server its key share, enables encryption and sends a Finished message (which is a hash of a transcript of what happened so far). The server does the same: it mixes the key shares to get the key and sends its own Finished message.

At that point we are done, and we can finally send useful data encrypted on the connection.

Notice that this takes two round-trips between the client and the server before the HTTP request can be transferred. And round-trips on the Internet can be slow.

TLS 1.3

Enter TLS 1.3. While TLS 1.0, 1.1 and 1.2 are not that different, 1.3 is a big jump.

Most importantly, establishing a TLS 1.3 connection takes one less round-trip.

TLS 1.3 handshake

In TLS 1.3 a client starts by sending not only the ClientHello and the list of supported ciphers, but it also makes a guess as to which key agreement algorithm the server will choose, and sends a key share for that.

(Note: the video calls the key agreement algorithm "cipher suite". In the meantime the specification has been changed to disjoin supported cipher suites like AES-GCM-SHA256 and supported key agreements like ECDHE P-256.)

And that saves us a round trip, because as soon as the server selects the cipher suite and key agreement algorithm, it's ready to generate the key, as it already has the client key share. So it can switch to encrypted packets one whole round-trip in advance.

So the server sends the ServerHello, its key share, the certificate (now encrypted, since it has a key!), and already the Finished message.

The client receives all that, generates the keys using the key share, checks the certificate and the Finished message, and is immediately ready to send the HTTP request - after only one round-trip. And a round-trip can take hundreds of milliseconds.
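
As an aside, and purely an assumption on my part since library support postdates this post: with a TLS 1.3 capable OpenSSL (1.1.1 or newer), a client can insist on the new handshake with a sketch like the following (socket setup and error handling omitted):

#include <openssl/ssl.h>

/* Minimal sketch: force a client connection to negotiate TLS 1.3 only,
   given an already-connected socket descriptor fd. */
SSL *connect_tls13(int fd, const char *hostname)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    if (!ctx)
        return NULL;

    /* Refuse anything older than 1.3, so a successful handshake is
       guaranteed to be the 1-RTT flow described above. */
    SSL_CTX_set_min_proto_version(ctx, TLS1_3_VERSION);

    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, fd);
    SSL_set_tlsext_host_name(ssl, hostname);   /* SNI */

    if (SSL_connect(ssl) != 1) {
        SSL_free(ssl);
        SSL_CTX_free(ctx);
        return NULL;
    }
    return ssl;   /* in real code, keep ctx around and free it later */
}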

TLS 1.2 resumption

One existing way to speed up TLS connections is called resumption. It's what happens when the client has connected to that server before, and uses what they remember from the last time to cut short the handshake.

TLS 1.2 resumption schema

How this worked in TLS 1.2 is that servers would send the client either a Session ID or a Session Ticket. The former is just a reference number that the server can trace back to a session, while the latter is an encrypted serialized session which allows the server not to keep state.

The next time the client would connect, it would send the Session ID or Ticket in the ClientHello, and the server would go like "hey, I know you, we have agreed on a key already", skip the whole key shares dance, and jump straight to Finished, saving a round-trip.

1.3 0-rtt resumption

So, we have a way to do 1-RTT connections in 1.2 if the client has connected before, which is very common. Then what does 1.3 gain us? When resumption is available, 1.3 allows us to do 0-RTT connections, again saving one round trip and ending up with no round trip at all.

If you have connected to a 1.3 server before you can immediately start sending encrypted data, like an HTTP request, without any round-trip at all, making TLS essentially zero overhead.

1.3 0-rtt resumption schema

When a 1.3 client connects to a 1.3 server they agree on a resumption key (or PSK, pre-shared key), and the server gives the client a Session Ticket that will help it remember it. The Ticket can be an encrypted copy of the PSK—to avoid state—or a reference number.

The next time the client connects, it sends the Session Ticket in the ClientHello and then immediately, without waiting for any round trip, sends the HTTP request encrypted with the PSK. The server figures out the PSK from the Session Ticket and uses that to decrypt the 0-RTT data.

The client also sends a key share, so that client and server can switch to a new fresh key for the actual HTTP response and the rest of the connection.

0-RTT caveats

0-RTT comes with a couple of caveats.

Since the PSK is not agreed upon with a fresh round of Diffie Hellman, it does not provide Forward Secrecy against a compromise of the Session Ticket key. That is, if in a year an attacker somehow obtains the Session Ticket key, it can decrypt the Session Ticket, obtain the PSK and decrypt the 0-RTT data the client sent (but not the rest of the connection).

This is why it's important to rotate Session Ticket keys often and never persist them (CloudFlare rotates these keys hourly).

TLS 1.2 has never provided any Forward Secrecy against a compromise of the Session Ticket key at all, so even with 0-RTT 1.3 is an improvement upon 1.2.

0-RTT replay

More problematic are replay attacks.

Since with Session Tickets servers are stateless, they have no way to know if a packet of 0-RTT data was already sent before.

Imagine that the 0-RTT data a client sent is not an HTTP GET ("hey, send me this page") but instead an HTTP POST executing a transaction like "hey, send Filippo 50$". If I'm in the middle I can intercept that ClientHello+0-RTT packet, and then re-send it to the server 100 times. No need to know any key. I now have 5000$.

Every time the server will see a Session Ticket, unwrap it to find the PSK, use the PSK to decrypt the 0-RTT data and find the HTTP POST inside, with no way to know something is fishy.

The solution is that servers must not execute non-idempotent operations received in 0-RTT data. Instead, in those cases they should force the client to perform a full 1-RTT handshake. That protects against replay, since each ClientHello and ServerHello comes with a Random value and connections have sequence numbers, so there's no way to replay recorded traffic verbatim.

Thankfully, most times the first request a client sends is not a state-changing transaction, but something idempotent like a GET.
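
In other words, the server-side policy can be as blunt as checking the method of anything that arrived as early data - a sketch of the idea, not CloudFlare's implementation:

#include <stdbool.h>
#include <string.h>

/* A request that arrived in 0-RTT early data is only safe to execute if
   it's idempotent; anything else should wait for the full 1-RTT handshake. */
static bool safe_as_early_data(const char *http_method)
{
    return strcmp(http_method, "GET") == 0 ||
           strcmp(http_method, "HEAD") == 0;
}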

removed

simplified

added

TLS 1.3 is not only good for cutting a round-trip. It's also better, more robust crypto all around.

Most importantly, many things were removed. 1.3 marked a shift in the design approach: it used to be the case that the TLS committee would accept any proposal that made sense, and implementations like OpenSSL would add support for it. Think, for example, of Heartbeats, the rarely used feature that caused Heartbleed.

In 1.3, everything was scrutinized for being really necessary and secure, and scrapped otherwise. A lot of things are gone.

We'll go over these in more detail in future blog posts.

Some of these were not necessarily broken by design, but they were dangerous, hard to implement correctly and easy to get wrong. The new excellent trend of TLS 1.3 and cryptography in general is to make mistakes less likely at the design stage, since humans are not perfect.

anti-downgrade

A new version of a protocol obviously can't dictate how older implementations behave and 1.3 can't improve the security of 1.2 systems. So how do you make sure that if tomorrow TLS 1.2 is completely broken, a client and server that both support 1.2 and 1.3 can't be tricked into using 1.2 by a Man in the Middle (MitM)?

A MitM could change the ClientHello to say "I want to talk at most TLS 1.2", and then use whichever attack it discovered to make the 1.2 connection succeed even if it tampered with a piece of the handshake.

1.3 has a clever solution to this: if a 1.3 server has to use 1.2 because it looks like the client doesn't support 1.3, it will "hide a message" in the Server Random value. A real 1.2 client will completely ignore it, but a client that supports 1.3 knows to look for it, and will discover that it's being tricked into downgrading to 1.2.

The Server Random is signed with the certificate in 1.2, so it's impossible to fake even if pieces of 1.2 are broken. This is very important, because it will allow us to keep supporting 1.2 in the future even if it's found to be weaker, unlike what we had to do with SSLv3 and POODLE. With 1.3 we will know for sure that clients that can do any better are not being put at risk, allowing us to make sure the Internet is for Everyone.
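
For reference, in the final RFC 8446 that hidden message is the last eight bytes of the Server Random, and a 1.3-capable client that ends up negotiating 1.2 can check for it roughly like this (the exact bytes may have differed in the 2016 drafts):

#include <stdbool.h>
#include <string.h>

/* RFC 8446 downgrade sentinel: ASCII "DOWNGRD" followed by 0x01 when a
   TLS 1.3 server is forced down to TLS 1.2. */
static const unsigned char DOWNGRADE_TLS12[8] =
    { 0x44, 0x4F, 0x57, 0x4E, 0x47, 0x52, 0x44, 0x01 };

/* server_random is the 32-byte Random field from the ServerHello. */
static bool downgrade_detected(const unsigned char server_random[32])
{
    return memcmp(server_random + 24, DOWNGRADE_TLS12, 8) == 0;
}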

solid

So this is TLS 1.3. Meant to be a solid, safe, robust, simple, essential foundation for Internet encryption for the years to come. And it's faster, so that no one will have performance reasons not to implement it.

TLS 1.3 is still a draft and it might change before being finalized, but at CloudFlare we are actively developing a 1.3 stack compatible with current experimental browsers, so everyone can get it today.

github

The TLS 1.3 spec is on GitHub, so anyone can contribute. Just while making the slides for this presentation I noticed I was having a hard time understanding a system because a diagram was missing some details, so I submitted a PR to fix it. How easy is that!?

questions

Like any talk, at the end there's the Q&A. Send your questions to tls13@cloudflare.com or leave them in the Disqus comments below and I'll answer them in an upcoming blog post!

Categories: Technology

Doctors put price on lives of those with Down's Syndrome

Christian Concern - Fri, 23/09/2016 - 17:00
Senior doctors attracted strong criticism this week, after suggesting that the NHS should work out the cost effectiveness of treating those with Down’s Syndrome.


The Royal College of Obstetricians and Gynaecologists raised the prospect in a consultation into a widely-criticised pre-natal test for Down’s, which the NHS is set to approve.


Archbishops of Canterbury and Westminster write to Polish President

Anglican Ink - Fri, 23/09/2016 - 16:51

Letter to Andrzej Duda from Archbishop Justin Welby and Cardinal Vincent Nichols on English prejudice against Poles in Britain

NZ Bishops statement on Euthanasia

Anglican Ink - Fri, 23/09/2016 - 16:43

Submission from 9 Anglican bishops to the Health Select Committee on Medically Assisted Dying

NZ church council urges parliament to reject euthanasia

Anglican Ink - Fri, 23/09/2016 - 16:36

In an oral submission to the Health Select Committee this week, three members of the InterChurch Bioethics Council (ICBC) argued against legalising assisted suicide or euthanasia in Aotearoa New Zealand.

"The real work has only just begun"

Christian Concern - Fri, 23/09/2016 - 15:25
Having attended the Wilberforce Academy for the first time in 2015, I knew that if there was an opportunity for me to attend again this year, then I needed to take it.

Immamuel Opara, 31, is a Fitness Instructor and a pro-life activist. Last year was his first time at the Wilberforce Academy; this year he came as part of the members' programme. He says:

"The Wilberforce Academy 2016 may have now finished, but the real work has only just begun."

 


"The most challenging... God-honouring course I have ever been on"

Christian Concern - Fri, 23/09/2016 - 15:24
The Wilberforce Academy was a challenging, eye-opening and influential course.

Sarah Halpin, 24, works as the Office Manager at Mosaic Church, Leeds. This year was her first time at the Wilberforce Academy. She says:

"It has been the most challenging, thought-provoking, relationship building and God-honouring course I have ever been on. Definitely one to repeat and recommend."

 


"I feel as though I spent a week with scores of budding William Wilberforces"

Christian Concern - Fri, 23/09/2016 - 14:10
Delegates from this year's Wilberforce Academy have written about their experience and how the week has impacted them.

Dr Frances Rabbitts, 27, is the Managing Editor of Prophecy Today UK. Frances attended the Academy for the first time this year. She says:

"I feel as though I spent a week with scores of budding William Wilberforces – what a privilege!"

 


Friday Quiz: The number 5

The Good Book Company - Fri, 23/09/2016 - 11:43

How did you get on? Join the conversation and comment below. You can also like us on Facebook, follow us on Twitter, subscribe to our YouTube Channel, and download The Good Book Company App straight to your phone or tablet.

Categories: Christian Resources
