Blogroll Category: Technology

I read blogs, as well as write one. The 'blogroll' on this site reproduces some posts from some of the people I enjoy reading. There are currently 177 posts from the category 'Technology'.

Disclaimer: Reproducing an article here does not necessarily imply agreement or endorsement!

Three new ways teams are using Cloudflare Access

CloudFlare - Wed, 15/08/2018 - 22:00

Since leaving beta three weeks ago, Cloudflare Access has become our fastest-growing subscription service. Every day, more teams are using Access to leave their VPN behind and connect to applications quickly and securely from anywhere in the world.

We’ve heard from a number of teams about how they’re using Access. Each team has unique needs to consider as they move away from a VPN and to a zero trust model. In a zero trust framework, each request has to prove that a given application should trust its attempt to reach a secure tool. In this post, we’re highlighting some of the solutions that groups are using to transition to Cloudflare Access.

Solution 1: Collaborate with External Partners

Cloudflare Access integrates with popular identity providers (IdPs) so that your team can reach internal applications without adding more credentials. However, teams rarely work in isolation. They frequently rely on external partners who also need to reach shared tools.

Granting and managing permissions for external partners poses a security challenge. Just because you are working with a third party doesn’t mean they should have credentials to your IdP. They typically need access to a handful of tools, not all of your internal resources.

We’ve heard from Access customers who are increasingly using the One-Time Pin feature to solve this problem. With One-Time Pin, you can grant access to third-party users without adding them to your IdP. Your internal team will continue to use their IdP credentials to authenticate while external users input their email address and receive a single-use code in their inbox. Here’s how your team can set this up:


In this example, we have Okta configured as our IdP. We have also enabled One-Time Pin as an additional login method.


Now that both login options are available, we can decide who should be able to reach our application. We’ll start by creating a new Access Group. An Access Group defines a set of users. We’ll name the group “Third-Party Partners” and include the email addresses of the individuals who need permission. Once the list is complete, the group can be saved.

Since Access Groups can be reused across policies, adding or removing a user from this list will apply to all policies that use the “Third-Party Partners” group.


Now that we have saved an Access Group, we can return to the administration panel and build a policy based on that group membership. First, we need to make sure our internal team can reach the application. To do so, we’ll create an Allow decision and include emails ending in our domain. Since that domain is tied to our Okta account, our internal team can continue to use Okta to reach the tool.

Next, we’ll need a second Include rule in the same policy for the external teams. For this rule, select “Access Groups” from the drop-down options. Once selected, we can pick the “Third-Party Partners” group that was saved in the previous step. This will allow any user who is a member of that group to reach the application.
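To make the policy logic concrete, here is a minimal sketch in Python of how the two Include rules combine. The domain, emails, and group members are entirely made up for illustration; this is not Cloudflare's implementation.

```python
# Hypothetical model of an Access-style Allow policy with two Include
# rules: internal users (email domain tied to the IdP) OR members of
# the "Third-Party Partners" Access Group.

INTERNAL_DOMAIN = "@example.com"                          # illustrative
THIRD_PARTY_PARTNERS = {"alice@partner-a.com",            # illustrative
                        "bob@partner-b.io"}

def is_allowed(email: str) -> bool:
    """Return True if either Include rule in the policy matches."""
    internal = email.lower().endswith(INTERNAL_DOMAIN)
    partner = email.lower() in THIRD_PARTY_PARTNERS
    return internal or partner

print(is_allowed("carol@example.com"))      # True: internal team via Okta
print(is_allowed("alice@partner-a.com"))    # True: partner via One-Time Pin
print(is_allowed("mallory@evil.example"))   # False: neither rule matches
```

Either rule matching is enough to allow the request, which is why the internal team and external partners can coexist in one policy.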


Now when users attempt to reach the application, they are presented with two options. The internal team can continue to log in with Okta. Third-party partners can instead select the One-Time Pin option.


When they choose One-Time Pin, they will be prompted to input their email address. Access will send a one-time code to their inbox. If they are an authorized user, as defined by the Access Group list, they can follow a link in that email or input the code to reach the application.
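As an illustration only, a one-time-pin flow like the one described can be sketched as follows. The six-digit code format and in-memory storage are assumptions for the sketch, not Access's actual mechanics.

```python
import secrets

AUTHORIZED = {"alice@partner-a.com"}   # the Access Group list (illustrative)
_pending = {}                          # email -> outstanding code

def request_pin(email):
    """Issue a single-use code only for emails on the authorized list."""
    if email not in AUTHORIZED:
        return None
    code = f"{secrets.randbelow(10**6):06d}"   # e.g. "042917"
    _pending[email] = code
    return code   # in practice this would be emailed, never returned directly

def redeem_pin(email, code):
    """A code is valid exactly once; pop it so it can't be replayed."""
    return _pending.pop(email, None) == code

code = request_pin("alice@partner-a.com")
assert redeem_pin("alice@partner-a.com", code)       # first use succeeds
assert not redeem_pin("alice@partner-a.com", code)   # replay is rejected
assert request_pin("mallory@evil.example") is None   # not on the list
```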

Solution 2: Require a Specific Network

For some applications, you want to ensure that your end users are both part of an approved list and originate from a known connection, like a secure office network. Building a rule with this requirement adds an extra layer of scrutiny to each request. Teams are using Access to enforce more comprehensive requirements like this one by creating policies with multiple rules. You can set this up for a specific application by creating a policy like the one below.


First, create a new Access Group and list the IP addresses or ranges you want to require. When adding multiple entries, use the Include rule, which means users must originate from one of the addresses in the list. Give the group a title like "Office Networks" and save it.


Next, create a new policy. First, allow users to authenticate with their IdP credentials by including your team’s email domain or the group name from your IdP. Second, add a rule to require that requests originate from the network(s) you defined in your Access Group.

In this example, users who want to reach the site would first need to authenticate with the IdP you have configured. In addition, Access will check to make sure their request is coming from the IP range you configured in the second rule underneath the “Require” line.
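A rough sketch of that two-part check, using Python's ipaddress module. The CIDR ranges and domain are placeholders, and the real evaluation happens inside Access, not in your code.

```python
import ipaddress

# The "Office Networks" Access Group from the previous step (illustrative).
OFFICE_NETWORKS = [ipaddress.ip_network("203.0.113.0/24"),
                   ipaddress.ip_network("198.51.100.0/24")]

def from_office(ip):
    """The Require rule: the source IP must fall inside a listed network."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in OFFICE_NETWORKS)

def allowed(email, ip):
    # Both conditions must hold: identity (Include) AND network (Require).
    return email.endswith("@example.com") and from_office(ip)

print(allowed("carol@example.com", "203.0.113.7"))   # True: IdP user on office IP
print(allowed("carol@example.com", "192.0.2.50"))    # False: right user, wrong network
```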

Solution 3: Reach On-Premise Applications with Argo Tunnel

Some applications are too sensitive to expose to the public internet through firewall ports and access control lists (ACLs). At first glance, these tools seem doomed to live on-premise and require a VPN when your team members are away from the office.

Cloudflare Access can still help. When you combine Access with Cloudflare Argo Tunnel, you can avoid the hassle of a VPN while making your on-premise applications available to end users through secure connections to the Internet. Argo Tunnel securely exposes your web servers to the Internet without opening up firewall ports or requiring ACL configuration. Argo Tunnel ensures that requests route through Cloudflare before reaching the web server.

To configure Argo Tunnel, you’ll first need to create a zone in Cloudflare to serve as the hostname for your web server. Argo Tunnel creates a DNS entry for that hostname so that visitors can find it. Next, lock down that hostname with a new Access policy. Once you’re ready, you can proceed to install Argo Tunnel on your web server by following the instructions here.

With Access and Argo Tunnel, teams are making their on-premise applications feel and operate like SaaS products.

What's next?

We’re always excited to hear about how customers use our products. The feedback helps us iterate and build better solutions for your teams. We’d like to thank our Access beta users, as well as early adopters, for their input. We’re excited to keep improving Access so that your team can continue transitioning away from your VPN.

Categories: Technology

EasyApache 4 updated

CloudLinux - Wed, 15/08/2018 - 16:13

A new updated ea-apache24-mod_suphp package for EasyApache 4 is now available for download from our production repository.

We strongly recommend this update for users with the suPHP handler set up for Apache.


  • EA4D-97: reverted the recent cPanel fixes (EA-7525 - targetMode in 0008-Support-phprc_paths-section-in-suphp.conf.patch checked before set). Corresponding cPanel ticket reference is EA-7779.

Update command:

yum update ea-apache24-mod_suphp
Categories: Technology

File (Field) Paths - Critical - Remote Code Execution - SA-CONTRIB-2018-056

Drupal Contrib Security - Wed, 15/08/2018 - 13:32
Project: File (Field) Paths
Date: 2018-August-15
Security risk: Critical 15∕25 AC:Basic/A:User/CI:Some/II:All/E:Theoretical/TD:Default
Vulnerability: Remote Code Execution
Description:

This module enables you to automatically sort and rename your uploaded files using token based replacement patterns to maintain a nice clean filesystem.

The module doesn't sufficiently sanitize the path while a new file is being uploaded, allowing a remote attacker to execute arbitrary PHP code.

This vulnerability is mitigated by the fact that an attacker must have access to a form containing a widget processed by this module.


Install the latest version:

Categories: Technology

EasyApache 4 updated

CloudLinux - Wed, 15/08/2018 - 10:55

New updated EasyApache packages are now available for download from our production repository.


ea-cpanel-tools 1.0-19.cloudlinux

  • EA-7549: if the PHP config file is a symlink and the force option is used, remove the symlink so the file can be written.

ea-documentroot 1.0-5.

  • updated footer logo to SVG;
  • updated index.html to set Cache-control to no-cache.

ea-nghttp2 1.32.0-1.

  • EA-7754: updated from 1.20.0 to 1.32.0.

ea-apache24-mod_suphp 0.7.2-25.cloudlinux

  • EA-7525: fixed 0008-Support-phprc_paths-section-in-suphp.conf.patch, targetMode used before initialized.

ea-php51-php-ioncube10 10.2.4-1.cloudlinux

  • EA-7753: updated from 10.2.2 to 10.2.4.

ea-php52-php-ioncube10 10.2.4-1.cloudlinux

  • EA-7753: updated from 10.2.2 to 10.2.4.

ea-php53-php-ioncube10 10.2.4-1.cloudlinux

  • EA-7753: updated from 10.2.2 to 10.2.4.

ea-php54-php-ioncube10 10.2.4-1.cloudlinux

  • EA-7753: updated from 10.2.2 to 10.2.4.

ea-php55-php-ioncube10 10.2.4-1.cloudlinux

  • EA-7753: updated from 10.2.2 to 10.2.4.

ea-php56-php-ioncube10 10.2.4-1.cloudlinux

  • EA-7753: updated from 10.2.2 to 10.2.4.

ea-php70-php-ioncube10 10.2.4-1.cloudlinux

  • EA-7753: updated from 10.2.2 to 10.2.4.

ea-php71-php-ioncube10 10.2.4-1.cloudlinux

  • EA-7753: updated from 10.2.2 to 10.2.4.

ea-php72-php-ioncube10 10.2.4-1.cloudlinux

  • EA-7753: updated from 10.2.2 to 10.2.4.

Update command:

yum update ea-cpanel-tools ea-documentroot ea-nghttp2 ea-apache24-mod_suphp ea-php51-php-ioncube10 ea-php52-php-ioncube10 ea-php53-php-ioncube10 ea-php54-php-ioncube10 ea-php55-php-ioncube10 ea-php56-php-ioncube10 ea-php70-php-ioncube10 ea-php71-php-ioncube10 ea-php72-php-ioncube10
Categories: Technology

Beta: MariaDB for MySQL Governor updated

CloudLinux - Tue, 14/08/2018 - 11:29

A new updated cl-MariaDB package for MySQL Governor is now available for download from our updates-testing repository.


cl-MariaDB100 10.0.36-1

  • MYSQLG-290: updated up to version 10.0.36.

To update MariaDB 10.0 run:

# yum update cl-MariaDB-meta-client cl-MariaDB-meta-devel cl-MariaDB-meta cl-MariaDB* governor-mysql --enablerepo=cloudlinux-updates-testing
# restart mysql
# restart governor-mysql

To install MariaDB 10.0.36 on a new server run:

# yum install governor-mysql
# /usr/share/lve/dbgovernor/ --mysql-version=mariadb100
# /usr/share/lve/dbgovernor/ --install-beta
Categories: Technology

SunshinePHP 2019 CFP Started

PHP - Tue, 14/08/2018 - 01:00
Categories: Technology

Beta: MySQL57 for MySQL Governor updated

CloudLinux - Mon, 13/08/2018 - 11:41

A new updated MySQL57 package for MySQL Governor is now available for download from our updates-testing repository.


  • updated up to version 5.7.23.

To update MySQL 5.7 run:

# yum update cl-MySQL-meta-client cl-MySQL-meta-devel cl-MySQL-meta cl-MySQL* governor-mysql --enablerepo=cloudlinux-updates-testing
# restart mysql
# restart governor-mysql

To install MySQL 5.7.23 on a new server run:

# yum install governor-mysql
# /usr/share/lve/dbgovernor/ --mysql-version=mysql57
# /usr/share/lve/dbgovernor/ --install-beta
Categories: Technology

Beta: MySQL56 for MySQL Governor updated

CloudLinux - Mon, 13/08/2018 - 11:36

A new updated MySQL56 package for MySQL Governor is now available for download from our updates-testing repository.


  • updated up to version 5.6.41.

To update MySQL 5.6 run:

# yum update cl-MySQL-meta-client cl-MySQL-meta-devel cl-MySQL-meta cl-MySQL* governor-mysql --enablerepo=cloudlinux-updates-testing
# restart mysql
# restart governor-mysql

To install MySQL 5.6.41 on a new server run:

# yum install governor-mysql
# /usr/share/lve/dbgovernor/ --mysql-version=mysql56
# /usr/share/lve/dbgovernor/ --install-beta
Categories: Technology

Imunify360 3.4.4 beta is here

CloudLinux - Mon, 13/08/2018 - 11:21

We are pleased to announce that a new updated Imunify360 Beta version 3.4.4 is now available. This latest version includes some bug fixes and new features.


Bug fixes

  • DEF-5477: removed remaining references to imunify360-captcha;
  • DEF-5579: do not spam log with scan scheme init message;
  • DEF-5615: fixed KeyError: 'inotify';
  • DEF-5660: options for php_i360 in the CageFS environment.

New features

  • DEF-5623: added Proactive Defense support for Plesk (UI);
  • DEF-5649: enabled end-user UI for Resellers in cPanel and Plesk.

To install the new Imunify360 Beta version 3.4.4, please follow the instructions in the documentation.

Upgrading is available starting from version 2.0-19.

To upgrade Imunify360 run the command:

yum update imunify360-firewall --enablerepo=imunify360-testing

More information on Imunify360 can be found here.

Categories: Technology

A Detailed Look at RFC 8446 (a.k.a. TLS 1.3)

CloudFlare - Sat, 11/08/2018 - 00:00

For the last five years, the Internet Engineering Task Force (IETF), the standards body that defines internet protocols, has been working on standardizing the latest version of one of its most important security protocols: Transport Layer Security (TLS). TLS is used to secure the web (and much more!), providing encryption and ensuring the authenticity of every HTTPS website and API. The latest version of TLS, TLS 1.3 (RFC 8446), was published today. It is the first major overhaul of the protocol, bringing significant security and performance improvements. This article provides a deep dive into the changes introduced in TLS 1.3 and its impact on the future of internet security.

An evolution

One major way Cloudflare provides security is by supporting HTTPS for websites and web services such as APIs. With HTTPS (the “S” stands for secure) the communication between your browser and the server travels over an encrypted and authenticated channel. Serving your content over HTTPS instead of HTTP provides confidence to the visitor that the content they see is presented by the legitimate content owner and that the communication is safe from eavesdropping. This is a big deal in a world where online privacy is more important than ever.

The machinery under the hood that makes HTTPS secure is a protocol called TLS. It has its roots in a protocol called Secure Sockets Layer (SSL) developed in the mid-nineties at Netscape. By the end of the 1990s, Netscape handed SSL over to the IETF, who renamed it TLS and have been the stewards of the protocol ever since. Many people still refer to web encryption as SSL, even though the vast majority of services have switched over to supporting TLS only. The term SSL continues to have popular appeal and Cloudflare has kept the term alive through product names like Keyless SSL and Universal SSL.


In the IETF, protocols are published as documents called RFCs. TLS 1.0 was RFC 2246, TLS 1.1 was RFC 4346, and TLS 1.2 was RFC 5246. Today, TLS 1.3 was published as RFC 8446. RFCs are generally published in order, so keeping 46 as part of the RFC number is a nice touch.

TLS 1.2 wears parachute pants and shoulder pads

MC Hammer, like SSL, was popular in the 90s

Over the last few years, TLS has seen its fair share of problems. First of all, there have been problems with the code that implements TLS, including Heartbleed, BERserk, goto fail;, and more. These issues are not fundamental to the protocol and mostly resulted from a lack of testing. Tools like TLS Attacker and Project Wycheproof have helped improve the robustness of TLS implementations, but the more challenging problems faced by TLS have had to do with the protocol itself.

TLS was designed by engineers using tools from mathematicians. Many of the early design decisions from the days of SSL were made using heuristics and an incomplete understanding of how to design robust security protocols. That said, this isn’t the fault of the protocol designers (Paul Kocher, Phil Karlton, Alan Freier, Tim Dierks, Christopher Allen and others), as the entire industry was still learning how to do this properly. When TLS was designed, formal papers on the design of secure authentication protocols like Hugo Krawczyk’s landmark SIGMA paper were still years away. TLS was 90s crypto: It meant well and seemed cool at the time, but the modern cryptographer’s design palette has moved on.

Many of the design flaws were discovered using formal verification. Academics attempted to prove certain security properties of TLS, but instead found counter-examples that were turned into real vulnerabilities. These weaknesses range from the purely theoretical (SLOTH and CurveSwap), to feasible for highly resourced attackers (WeakDH, LogJam, FREAK, SWEET32), to practical and dangerous (POODLE, ROBOT).

TLS 1.2 is slow

Encryption has always been important online, but historically it was only used for things like logging in or sending credit card information, leaving most other data exposed. There has been a major trend in the last few years towards using HTTPS for all traffic on the Internet. This has the positive effect of protecting more of what we do online from eavesdroppers and injection attacks, but has the downside that new connections get a bit slower.

For a browser and web server to agree on a key, they need to exchange cryptographic data. The exchange, called the “handshake” in TLS, has remained largely unchanged since TLS was standardized in 1999. The handshake requires two additional round-trips between the browser and the server before encrypted data can be sent (or one when resuming a previous connection). The additional cost of the TLS handshake for HTTPS results in a noticeable hit to latency compared to HTTP alone. This additional delay can negatively impact performance-focused applications.

Defining TLS 1.3

Unsatisfied with the outdated design of TLS 1.2 and its two-round-trip overhead, the IETF set about defining a new version of TLS. In August 2013, Eric Rescorla laid out a wishlist of features for the new protocol.

After some debate, it was decided that this new version of TLS was to be called TLS 1.3. The main issues that drove the design of TLS 1.3 were mostly the same as those presented five years ago:

  • reducing handshake latency
  • encrypting more of the handshake
  • improving resiliency to cross-protocol attacks
  • removing legacy features

The specification was shaped by volunteers through an open design process, and after four years of diligent work and vigorous debate, TLS 1.3 is now in its final form: RFC 8446. As adoption increases, the new protocol will make the internet both faster and more secure.

In this blog post I will focus on the two main advantages TLS 1.3 has over previous versions: security and performance.

Trimming the hedges


In the last two decades, we as a society have learned a lot about how to write secure cryptographic protocols. The parade of cleverly-named attacks from POODLE to Lucky13 to SLOTH to LogJam showed that even TLS 1.2 contains antiquated ideas from the early days of cryptographic design. One of the design goals of TLS 1.3 was to correct previous mistakes by removing potentially dangerous design elements.

Fixing key exchange

TLS is a so-called “hybrid” cryptosystem. This means it uses both symmetric key cryptography (encryption and decryption keys are the same) and public key cryptography (encryption and decryption keys are different). Hybrid schemes are the predominant form of encryption used on the Internet and are used in SSH, IPsec, Signal, WireGuard and other protocols. In hybrid cryptosystems, public key cryptography is used to establish a shared secret between both parties, and the shared secret is used to create symmetric keys that can be used to encrypt the data exchanged.

As a rule of thumb, public key crypto is slow and expensive (microseconds to milliseconds per operation) and symmetric key crypto is fast and cheap (nanoseconds per operation). Hybrid encryption schemes let you send a lot of encrypted data with very little overhead by only doing the expensive part once. Much of the work in TLS 1.3 has been about improving the part of the handshake where public keys are used to establish symmetric keys.

RSA key exchange

The public key portion of TLS is about establishing a shared secret. There are two main ways of doing this with public key cryptography. The simpler way is with public-key encryption: one party encrypts the shared secret with the other party’s public key and sends it along. The other party then uses its private key to decrypt the shared secret and ... voila! They both share the same secret. This technique was discovered in 1977 by Rivest, Shamir and Adleman and is called RSA key exchange. In TLS’s RSA key exchange, the shared secret is decided by the client, who then encrypts it to the server’s public key (extracted from the certificate) and sends it to the server.
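The arithmetic behind that flow can be shown with textbook RSA and absurdly small numbers. Real TLS used large keys plus padding, which is exactly where the attacks described below lived; this sketch is only the skeleton of the exchange.

```python
# Toy textbook-RSA key exchange: the client picks the pre-master secret
# and encrypts it to the server's public key. Numbers are deliberately tiny.

p, q = 61, 53                 # server's secret primes (absurdly small here)
n = p * q                     # public modulus, part of the certificate
e = 17                        # public exponent
d = 2753                      # private exponent: (e * d) % ((p-1)*(q-1)) == 1

client_secret = 42            # pre-master secret chosen by the client
ciphertext = pow(client_secret, e, n)    # encrypt with server's public key
server_secret = pow(ciphertext, d, n)    # server decrypts with its private key

assert server_secret == client_secret    # both sides now share the secret
```

Anyone who later learns d can decrypt a recorded ciphertext the same way, which is precisely the forward-secrecy problem discussed below.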


The other form of key exchange available in TLS is based on another form of public-key cryptography, invented by Diffie and Hellman in 1976, so-called Diffie-Hellman key agreement. In Diffie-Hellman, the client and server both start by creating a public-private key pair. They then send the public portion of their key share to the other party. When each party receives the public key share of the other, they combine it with their own private key and end up with the same value: the pre-master secret. The server then uses a digital signature to ensure the exchange hasn’t been tampered with. This key exchange is called “ephemeral” if the client and server both choose a new key pair for every exchange.
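A minimal sketch of the agreement, with a deliberately tiny prime so the arithmetic stays visible. Never use parameters this small in practice; real deployments use large named groups.

```python
import secrets

# Toy ephemeral Diffie-Hellman: both sides pick fresh key pairs,
# exchange public shares, and arrive at the same pre-master secret.

p = 2_147_483_647        # a Mersenne prime, far too small for real security
g = 7                    # generator (illustrative)

a = secrets.randbelow(p - 2) + 1         # client's ephemeral private key
b = secrets.randbelow(p - 2) + 1         # server's ephemeral private key

A = pow(g, a, p)                         # client's public key share
B = pow(g, b, p)                         # server's public key share

client_pms = pow(B, a, p)                # client combines B with its private a
server_pms = pow(A, b, p)                # server combines A with its private b

assert client_pms == server_pms          # identical pre-master secret
```

Because a and b are thrown away after the handshake, recording the traffic and later stealing the server's long-term key reveals nothing, which is the forward-secrecy property RSA mode lacks.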


Both modes result in the client and server having a shared secret, but RSA mode has a serious downside: it’s not forward secret. That means that if someone records the encrypted conversation and then gets ahold of the RSA private key of the server, they can decrypt the conversation. This even applies if the conversation was recorded and the key is obtained some time well into the future. In a world where national governments are recording encrypted conversations and using exploits like Heartbleed to steal private keys, this is a realistic threat.

RSA key exchange has been problematic for some time, and not just because it’s not forward-secret. It’s also notoriously difficult to do correctly. In 1998, Daniel Bleichenbacher discovered a vulnerability in the way RSA encryption was done in SSL and created what’s called the “million-message attack,” which allows an attacker to perform an RSA private key operation with a server’s private key by sending a million or so well-crafted messages and looking for differences in the error codes returned. The attack has been refined over the years and in some cases only requires thousands of messages, making it feasible to do from a laptop. Major websites were found to be vulnerable to a variant of Bleichenbacher’s attack, called the ROBOT attack, as recently as 2017.

To reduce the risks caused by non-forward secret connections and million-message attacks, RSA encryption was removed from TLS 1.3, leaving ephemeral Diffie-Hellman as the only key exchange mechanism. Removing RSA key exchange brings other advantages, as we will discuss in the performance section below.

Diffie-Hellman named groups

When it comes to cryptography, giving too many options leads to the wrong option being chosen. This principle is most evident when it comes to choosing Diffie-Hellman parameters. In previous versions of TLS, the choice of Diffie-Hellman parameters was left to the participants, and some implementations chose them incorrectly, resulting in vulnerable deployments. TLS 1.3 takes this choice away.

Diffie-Hellman is a powerful tool, but not all Diffie-Hellman parameters are “safe” to use. The security of Diffie-Hellman depends on the difficulty of a specific mathematical problem called the discrete logarithm problem. If you can solve the discrete logarithm problem for a set of parameters, you can extract the private key and break the security of the protocol. Generally speaking, the bigger the numbers used, the harder it is to solve the discrete logarithm problem. So if you choose small DH parameters, you’re in trouble.

The LogJam and WeakDH attacks of 2015 showed that many TLS servers could be tricked into using small numbers for Diffie-Hellman, allowing an attacker to break the security of the protocol and decrypt conversations.
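The principle is easy to demonstrate: with a toy-sized prime, recovering a working private exponent is a simple loop. LogJam needed far more sophisticated math against 512-bit export groups, but the idea is the same.

```python
# Brute-forcing the discrete log when the Diffie-Hellman prime is tiny.

p, g = 997, 7            # laughably small parameters
private = 123
public = pow(g, private, p)

def brute_force_dlog(target, g, p):
    """Try every exponent until g^x mod p matches the public value."""
    for x in range(p):
        if pow(g, x, p) == target:
            return x
    return None

recovered = brute_force_dlog(public, g, p)
assert pow(g, recovered, p) == public    # a working private exponent, instantly
```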

Diffie-Hellman also requires the parameters to have certain other mathematical properties. In 2016, Antonio Sanso found an issue in OpenSSL where parameters were chosen that lacked the right mathematical properties, resulting in another vulnerability.

TLS 1.3 takes the opinionated route, restricting the Diffie-Hellman parameters to ones that are known to be secure. However, it still leaves several options; permitting only one option makes it difficult to update TLS in case these parameters are found to be insecure some time in the future.

Fixing ciphers

The other half of a hybrid crypto scheme is the actual encryption of data. This is done by combining an authentication code and a symmetric cipher for which each party knows the key. As I’ll describe, there are many ways to encrypt data, most of which are wrong.

CBC mode ciphers

In the last section we described TLS as a hybrid encryption scheme, with a public key part and a symmetric key part. The public key part is not the only one that has caused trouble over the years. The symmetric key portion has also had its fair share of issues. In any secure communication scheme, you need both encryption (to keep things private) and integrity (to make sure people don’t modify, add, or delete pieces of the conversation). Symmetric key encryption is used to provide both encryption and integrity, but in TLS 1.2 and earlier, these two pieces were combined in the wrong way, leading to security vulnerabilities.

An algorithm that performs symmetric encryption and decryption is called a symmetric cipher. Symmetric ciphers usually come in two main forms: block ciphers and stream ciphers.

A stream cipher takes a fixed-size key and uses it to create a stream of pseudo-random data of arbitrary length, called a key stream. To encrypt with a stream cipher, you take your message and combine it with the key stream by XORing each bit of the key stream with the corresponding bit of your message. To decrypt, you take the encrypted message and XOR it with the key stream. Examples of pure stream ciphers are RC4 and ChaCha20. Stream ciphers are popular because they’re simple to implement and fast in software.
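The XOR construction can be sketched in a few lines. Here a keystream derived by hashing a counter stands in for a real stream cipher like ChaCha20; only the XOR mechanics are the point.

```python
import hashlib

def keystream(key, length):
    """Stand-in keystream generator: hash (key, counter) blocks."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key, data):
    """XOR data with the keystream; the same call encrypts and decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"attack at dawn"
ct = xor_cipher(b"secret key", msg)
pt = xor_cipher(b"secret key", ct)   # XOR twice with the same stream = identity
assert pt == msg
```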

A block cipher is different from a stream cipher because it only encrypts fixed-size messages. If you want to encrypt a message that is shorter or longer than the block size, you have to do a bit of work. For shorter messages, you have to add some extra data (padding) to the end of the message. For longer messages, you can split your message up into blocks the cipher can encrypt and then use a block cipher mode to combine the pieces together somehow. Alternatively, you can turn your block cipher into a stream cipher by encrypting a sequence of counters with the block cipher and using that as the key stream. This is called “counter mode”. One popular way of encrypting arbitrary-length data with a block cipher is a mode called cipher block chaining (CBC).
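Counter mode itself is a small amount of code. In this sketch a keyed SHA-256 stands in for the block cipher, since Python's standard library has no AES; the structure (encrypt a counter, XOR the result with the plaintext) is what matters.

```python
import hashlib

BLOCK = 16

def block_encrypt(key, block):
    """Stand-in for one block-cipher call (e.g. AES-128 in real TLS)."""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ctr_mode(key, nonce, data):
    """Counter mode: each block of keystream is the encryption of a counter."""
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        counter = nonce + i.to_bytes(8, "big")   # unique input per block
        ks = block_encrypt(key, counter)
        chunk = data[i:i + BLOCK]
        out += bytes(a ^ b for a, b in zip(chunk, ks))
    return bytes(out)

msg = b"counter mode makes a block cipher act like a stream cipher"
ct = ctr_mode(b"key", b"nonce123", msg)
assert ctr_mode(b"key", b"nonce123", ct) == msg   # same operation decrypts
```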


In order to prevent people from tampering with data, encryption is not enough. Data also needs to be integrity-protected. For CBC-mode ciphers, this is done using something called a message-authentication code (MAC), which is like a fancy checksum with a key. Cryptographically strong MACs have the property that finding a MAC value that matches an input is practically impossible unless you know the secret key. There are two ways to combine MACs and CBC-mode ciphers. Either you encrypt first and then MAC the ciphertext, or you MAC the plaintext first and then encrypt the whole thing. In TLS, they chose the latter, MAC-then-Encrypt, which turned out to be the wrong choice.
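For contrast, here is a sketch of the safer ordering, Encrypt-then-MAC, where tampered ciphertext is rejected before any decryption or padding logic runs. The XOR "cipher" is a stand-in; only the composition order is the point.

```python
import hmac, hashlib

def encrypt(key, msg):
    """Stand-in XOR cipher (involutive: the same call decrypts)."""
    ks = hashlib.sha256(key).digest() * (len(msg) // 32 + 1)
    return bytes(a ^ b for a, b in zip(msg, ks))

def seal(enc_key, mac_key, msg):
    ct = encrypt(enc_key, msg)
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()   # MAC the ciphertext
    return ct + tag

def open_(enc_key, mac_key, sealed):
    ct, tag = sealed[:-32], sealed[-32:]
    expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None                    # reject before any decryption happens
    return encrypt(enc_key, ct)

sealed = seal(b"ek", b"mk", b"hello")
assert open_(b"ek", b"mk", sealed) == b"hello"
tampered = bytes([sealed[0] ^ 1]) + sealed[1:]
assert open_(b"ek", b"mk", tampered) is None    # tampering detected up front
```

Because verification happens on the ciphertext, the decryption and padding code never sees attacker-modified data, which closes the door on padding-oracle attacks.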

You can blame this choice for BEAST, as well as a slew of padding oracle vulnerabilities such as Lucky 13 and Lucky Microseconds. Read my previous post on the subject for a comprehensive explanation of these flaws. The interaction between CBC mode and padding was also the cause of the widely publicized POODLE vulnerability in SSLv3 and some implementations of TLS.

RC4 is a classic stream cipher designed by Ron Rivest (the “R” of RSA) that was broadly supported since the early days of TLS. In 2013, it was found to have measurable biases that could be leveraged to allow attackers to decrypt messages.


In TLS 1.3, all the troublesome ciphers and cipher modes have been removed. You can no longer use CBC-mode ciphers or insecure stream ciphers such as RC4. The only type of symmetric crypto allowed in TLS 1.3 is a new construction called AEAD (authenticated encryption with additional data), which combines encryption and integrity into one seamless operation.

Fixing digital signatures

Another important part of TLS is authentication. In every connection, the server authenticates itself to the client using a digital certificate, which has a public key. In RSA-encryption mode, the server proves its ownership of the private key by decrypting the pre-master secret and computing a MAC over the transcript of the conversation. In Diffie-Hellman mode, the server proves ownership of the private key using a digital signature. If you’ve been following this blog post so far, it should be easy to guess that this was done incorrectly too.


Daniel Bleichenbacher has made a living identifying problems with RSA in TLS. In 2006, he devised a pen-and-paper attack against RSA signatures as used in TLS. It was later discovered that major TLS implementations, including those of NSS and OpenSSL, were vulnerable to this attack. This issue again had to do with how difficult it is to implement padding correctly, in this case the PKCS#1 v1.5 padding used in RSA signatures. In TLS 1.3, PKCS#1 v1.5 is removed in favor of the newer design RSA-PSS.

Signing the entire transcript

We described earlier how the server uses a digital signature to prove that the key exchange hasn’t been tampered with. In TLS 1.2 and earlier, the server’s signature only covers part of the handshake. The other parts of the handshake, specifically the parts that are used to negotiate which symmetric cipher to use, are not signed by the private key. Instead, a symmetric MAC is used to ensure that the handshake was not tampered with. This oversight resulted in a number of high-profile vulnerabilities (FREAK, LogJam, etc.). In TLS 1.3 these are prevented because the server signs the entire handshake transcript.
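A toy illustration of why this helps, with HMAC standing in for the server's certificate signature and made-up handshake messages:

```python
import hashlib, hmac

def transcript_hash(messages):
    """Hash every handshake message into one digest."""
    h = hashlib.sha256()
    for m in messages:
        h.update(m)
    return h.digest()

server_key = b"stand-in for the server's certificate private key"

client_hello = b"ClientHello: ciphers=[AES-GCM, CHACHA20]"
server_hello = b"ServerHello: cipher=AES-GCM"
transcript = [client_hello, server_hello]

# Server "signs" the hash of the full transcript.
signature = hmac.new(server_key, transcript_hash(transcript),
                     hashlib.sha256).digest()

# A man-in-the-middle swaps the client's cipher list for weak export ciphers:
forged = [b"ClientHello: ciphers=[EXPORT-RC4-40]", server_hello]
check = hmac.new(server_key, transcript_hash(forged),
                 hashlib.sha256).digest()

assert not hmac.compare_digest(signature, check)   # the swap is detectable
```

Since any change to any handshake message changes the transcript hash, the signature pins down the whole negotiation, not just part of it.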


The FREAK, LogJam and CurveSwap attacks took advantage of two things:

  1. the fact that intentionally weak ciphers from the 1990s (called export ciphers) were still supported in many browsers and servers, and
  2. the fact that the part of the handshake used to negotiate which cipher was used was not digitally signed.

The “man-in-the-middle” attacker can swap out the supported ciphers (or supported groups, or supported curves) from the client with an easily crackable choice that the server supports. They then break the key and forge two finished messages to make both parties think they’ve agreed on a transcript.


These attacks are called downgrade attacks, and they allow attackers to force two participants to use the weakest cipher supported by both parties, even if more secure ciphers are supported. In this style of attack, the perpetrator sits in the middle of the handshake and changes the list of supported ciphers advertised from the client to the server to only include weak export ciphers. The server then chooses one of the weak ciphers, and the attacker figures out the key with a brute-force attack, allowing the attacker to forge the MACs on the handshake. In TLS 1.3, this type of downgrade attack is impossible because the server now signs the entire handshake, including the cipher negotiation.


Better living through simplification

TLS 1.3 is a much more elegant and secure protocol with the insecure features listed above removed. This hedge-trimming allowed the protocol to be simplified in ways that make it both easier to understand and faster.

No more take-out menu

In previous versions of TLS, the main negotiation mechanism was the ciphersuite. A ciphersuite encompassed almost everything that could be negotiated about a connection:

  • type of certificates supported
  • hash function used for deriving keys (e.g., SHA1, SHA256, ...)
  • MAC function (e.g., HMAC with SHA1, SHA256, …)
  • key exchange algorithm (e.g., RSA, ECDHE, …)
  • cipher (e.g., AES, RC4, ...)
  • cipher mode, if applicable (e.g., CBC)

Ciphersuites in previous versions of TLS had grown into monstrously large alphabet soups. Examples of commonly used cipher suites are: DHE-RC4-MD5 or ECDHE-ECDSA-AES-GCM-SHA256. Each ciphersuite was represented by a code point in a table maintained by an organization called the Internet Assigned Numbers Authority (IANA). Every time a new cipher was introduced, a new set of combinations needed to be added to the list. This resulted in a combinatorial explosion of code points representing every valid choice of these parameters. It had become a bit of a mess.
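The combinatorial explosion is easy to see with some arithmetic. The option lists below are illustrative (not the full IANA registry), but they show why one code point per combination grows multiplicatively while TLS 1.3's orthogonal negotiation grows only additively:

```javascript
// Illustrative only: a handful of choices per axis, not the real registry.
const tls12Choices = {
  keyExchange: ["RSA", "DHE", "ECDHE"],
  signature: ["RSA", "ECDSA", "DSS"],
  cipher: ["AES128-CBC", "AES256-CBC", "AES128-GCM", "AES256-GCM", "3DES", "RC4"],
  hash: ["SHA1", "SHA256", "SHA384"],
};

// TLS 1.2 style: one ciphersuite code point per valid combination.
const combined = Object.values(tls12Choices)
  .reduce((n, options) => n * options.length, 1);

// TLS 1.3 style: each axis is negotiated separately, so code points add.
const orthogonal = Object.values(tls12Choices)
  .reduce((n, options) => n + options.length, 0);

console.log(combined);   // 162 combined code points
console.log(orthogonal); // 15 independent code points
```

Adding one new hash in the multiplicative scheme mints dozens of new suites; in the additive scheme it mints exactly one new code point.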

TLS 1.2

TLS 1.3

TLS 1.3 removes many of these legacy features, allowing for a clean split between three orthogonal negotiations:
  • Cipher + HKDF Hash
  • Key Exchange
  • Signature Algorithm


This simplified cipher suite negotiation and radically reduced set of negotiation parameters opens up a new possibility: the TLS 1.3 handshake latency drops from two round-trips to only one, providing the performance boost that will help ensure TLS 1.3 is widely adopted.


When establishing a new connection to a server that you haven’t seen before, it takes two round-trips before data can be sent on the connection. This is not particularly noticeable in locations where the server and client are geographically close to each other, but it can make a big difference on mobile networks where latency can be as high as 200ms, an amount that is noticeable for humans.

1-RTT mode

TLS 1.3 now has a radically simpler cipher negotiation model and a reduced set of key agreement options (no RSA, no user-defined DH parameters). This means that every connection will use a DH-based key agreement and the parameters supported by the server are likely easy to guess (ECDHE with X25519 or P-256). Because of this limited set of choices, the client can simply choose to send DH key shares in the first message instead of waiting until the server has confirmed which key shares it is willing to support. That way, the server can learn the shared secret and send encrypted data one round trip earlier. Chrome’s implementation of TLS 1.3, for example, sends an X25519 keyshare in the first message to the server.
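The reason the optimistic key share works is a property of Diffie-Hellman itself: the server only needs the client's public value to derive the shared secret. The toy below uses finite-field DH with tiny, deliberately insecure numbers purely to show the flow; real TLS 1.3 uses X25519 or P-256:

```javascript
// Toy finite-field Diffie-Hellman. These parameters are NOT secure.
const p = 2147483647n; // small prime (2^31 - 1), illustration only
const g = 5n;

// Square-and-multiply modular exponentiation for BigInt.
const modPow = (base, exp, mod) => {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
};

// ClientHello: the client guesses the group and sends its key share up front.
const clientSecret = 123456789n;
const clientKeyShare = modPow(g, clientSecret, p);

// ServerHello: the server replies with its own share and can already derive keys.
const serverSecret = 987654321n;
const serverKeyShare = modPow(g, serverSecret, p);

const serverShared = modPow(clientKeyShare, serverSecret, p);
const clientShared = modPow(serverKeyShare, clientSecret, p);

console.log(serverShared === clientShared); // true — one round trip saved
```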


In the rare situation that the server does not support one of the key shares sent by the client, the server can send a new message, the HelloRetryRequest, to let the client know which groups it supports. Because the list has been trimmed down so much, this is not expected to be a common occurrence.

0-RTT resumption

A further optimization was inspired by the QUIC protocol. It lets clients send encrypted data in their first message to the server, resulting in no additional latency cost compared to unencrypted HTTP. This is a big deal, and once TLS 1.3 is widely deployed, the encrypted web is sure to feel much snappier than before.

In TLS 1.2, there are two ways to resume a connection, session ids and session tickets. In TLS 1.3 these are combined to form a new mode called PSK (pre-shared key) resumption. The idea is that after a session is established, the client and server can derive a shared secret called the “resumption master secret”. This can either be stored on the server with an id (session id style) or encrypted by a key known only to the server (session ticket style). This session ticket is sent to the client and redeemed when resuming a connection.

For resumed connections, both parties share a resumption master secret, so a key exchange is not necessary except to provide forward secrecy. The next time the client connects to the server, it can take the secret from the previous session and use it to encrypt application data to send to the server, along with the session ticket. Something as amazing as sending encrypted data on the first flight does come with its downsides.


There is no interactivity in 0-RTT data. It’s sent by the client, and consumed by the server without any interactions. This is great for performance, but comes at a cost: replayability. If an attacker captures a 0-RTT packet that was sent to server, they can replay it and there’s a chance that the server will accept it as valid. This can have interesting negative consequences.


An example of dangerous replayed data is anything that changes state on the server. If you increment a counter, perform a database transaction, or do anything that has a permanent effect, it’s risky to put it in 0-RTT data.

As a client, you can try to protect against this by only putting “safe” requests into the 0-RTT data. In this context, “safe” means that the request won’t change server state. In HTTP, different methods are supposed to have different semantics. HTTP GET requests are supposed to be safe, so a browser can usually protect HTTPS servers against replay attacks by only sending GET requests in 0-RTT. Since most page loads start with a GET of “/” this results in faster page load time.

Problems start to happen when data sent in 0-RTT is used for state-changing requests. To help protect against this failure case, TLS 1.3 also includes the elapsed-time value in the session ticket. If this diverges too much, the client is either approaching the speed of light, or the value has been replayed. In either case, it's prudent for the server to reject the 0-RTT data.
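The server-side freshness check amounts to a single comparison. The tolerance and ages below are made-up numbers for illustration:

```javascript
// Tolerance for clock skew and network delay; illustrative value.
const MAX_SKEW_MS = 10000;

// The server compares how old the client claims the ticket is against how
// old the server knows it to be; large divergence suggests a replay.
function accept0RTT(clientTicketAgeMs, serverTicketAgeMs) {
  return Math.abs(clientTicketAgeMs - serverTicketAgeMs) <= MAX_SKEW_MS;
}

console.log(accept0RTT(5000, 6000));   // true  — fresh enough
console.log(accept0RTT(5000, 600000)); // false — likely a replayed ClientHello
```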

For more details about 0-RTT, and the improvements to session resumption in TLS 1.3, check out this previous blog post.


TLS 1.3 was a radical departure from TLS 1.2 and earlier, but in order to be deployed widely, it has to be backwards compatible with existing software. One of the reasons TLS 1.3 has taken so long to go from draft to final publication was the fact that some existing software (namely middleboxes) wasn’t playing nicely with the new changes. Even minor changes to the TLS 1.3 protocol that were visible on the wire (such as eliminating the redundant ChangeCipherSpec message, bumping the version from 0x0303 to 0x0304) ended up causing connection issues for some people.

Despite the fact that future flexibility was built into the TLS spec, some implementations made incorrect assumptions about how to handle future TLS versions. The phenomenon responsible for this change is called ossification and I explore it more fully in the context of TLS in my previous post about why TLS 1.3 isn’t deployed yet. To accommodate these changes, TLS 1.3 was modified to look a lot like TLS 1.2 session resumption (at least on the wire). This resulted in a much more functional, but less aesthetically pleasing protocol. This is the price you pay for upgrading one of the most widely deployed protocols online.


TLS 1.3 is a modern security protocol built with modern tools like formal analysis that retains its backwards compatibility. It has been tested widely and iterated upon using real world deployment data. It’s a cleaner, faster, and more secure protocol ready to become the de facto two-party encryption protocol online. TLS 1.3 is enabled by default for all Cloudflare customers.

Publishing TLS 1.3 is a huge accomplishment. It is one of the best recent examples of how it is possible to take 20 years of deployed legacy code and change it on the fly, resulting in a better internet for everyone. TLS 1.3 has been debated and analyzed for the last three years and it's now ready for prime time. Welcome, RFC 8446.

Categories: Technology

Northeast PHP Boston 2018

PHP - Fri, 10/08/2018 - 02:41
Categories: Technology

Optimising Caching on Pwned Passwords (with Workers)

CloudFlare - Thu, 09/08/2018 - 16:42

In February, Troy Hunt unveiled Pwned Passwords v2. Containing over half a billion real world leaked passwords, this database provides a vital tool for correcting the course of how the industry combats modern threats against password security.

In supporting this project, I built a k-Anonymity model to add a layer of security to queries. This model allows for enhanced caching by mapping multiple leaked password hashes to a single hash prefix, and it operates in a deterministic, HTTP-friendly way (which allows caching, whereas other implementations of Private Set Intersection require a degree of randomness).

Since launch, Pwned Passwords, using this anonymity model and delivered by Cloudflare, has seen widespread adoption across a wide variety of platforms - from sites like EVE Online and Kogan to tools like 1Password and Okta's PassProtect. The anonymity model is also used by Firefox Monitor when checking if an email is in a data breach.

Since it was adopted, Troy has tweeted about the high cache hit ratio, and people have been asking me about my "secret ways" of achieving it. Over time I have touched various pieces of Cloudflare's caching systems; in late 2016 I worked to bring Bypass Cache on Cookie functionality to our self-service Business plan users and wrestled with the cache implications of CSRF tokens - however, Pwned Passwords was far more fun, as it let me show the power of Cloudflare's cache functionality from the perspective of a user.

Looks like Pwned Passwords traffic has started to double over the norm, trending around 8M requests a day now. @IcyApril made a cache change to improve stability but reduce hit ratio around the 10th, but that's improving again now with higher volumes (94% for the last week).

— Troy Hunt (@troyhunt) June 25, 2018

Will @IcyApril secret ways ever be released?!

— Neal (@tun35) May 7, 2018

It is worth noting that Pwned Passwords is not like a typical website in terms of caching - it contains 16^5 possible API queries (any possible string of five hexadecimal characters, in total over a million possible queries) in order to guarantee k-Anonymity in the API. Whilst the API guarantees k-Anonymity, it does not guarantee l-Diversity, meaning some individual queries can occur more often than others.

For ordinary websites, with fewer assets, the cache hit ratio can be far greater. An example of this is another site Troy set up using our barebones free plan; by simply configuring a Page Rule with the Cache Everything option (and setting an Edge Cache TTL, should the Cache-Control headers from your origin not do so), you are able to cache static HTML easily.

When I've written about really high cache-hit ratios on @haveibeenpwned courtesy of @Cloudflare, some people have suggested it's due to higher-level plans. Here's running on the *free* plan: 99.0% cache hit ratio on requests and 99.5% on bandwidth. Free!

— Troy Hunt (@troyhunt) July 31, 2018

Origin Headers

Indeed, the fact the queries are usually API queries makes a substantial difference. When optimising caching, the most important thing to look for is instances where the same cache asset is stored multiple times for different cache keys; for some assets this may involve selectively ignoring query strings for cache purposes, but for APIs the devil is more in the detail.

When an HTTP request is made from a JavaScript asset (as is done when Pwned Passwords is directly implemented in login forms), the site will also send an Origin header to indicate where the fetch originates from.

When you make a search, a bit of JavaScript takes the password, applies the k-Anonymity model by SHA-1 hashing it and truncating the hash to the first five characters, and sends that prefix off in a request (then performs a check to see if any of the suffixes contained in the response match).

In the headers of this request, you can see an Origin header identifying the querying site.


This header is often useful for mitigating Cross-Site Request Forgery (CSRF) vulnerabilities by only allowing certain Origins to make HTTP requests using Cross-Origin Resource Sharing (CORS).

In the context of an API, this does not necessarily make sense where there is no state (i.e. cookies). However, Cloudflare's default Cache Key contains this header for those who wish to use it. This means Cloudflare will store a new cached copy of the asset whenever a different Origin header is present. Whilst this is ordinarily not a problem (most sites see one Origin header, or just a handful when using CORS), Pwned Passwords has Origin headers coming from websites all over the internet.

As Pwned Passwords will always respond with the same content for a given request, regardless of the Origin header, we are able to remove this header from the Cache Key using our Custom Cache Key functionality.

Incidentally, JavaScript CDNs are frequently asked to fetch assets as sub-resources from another JavaScript asset - removing the Origin header from their Cache Key can have similar benefits:

Just applied some @Cloudflare cache magic I experimented with to get @troyhunt's Pwned Passwords API cache hit ratio to ~91%, to a large JS CDN (@unpkg) during a slow traffic period. Traffic 30mins post deploy shows a growing ~94% Cache Hit Ratio (with a planned cache purge!).

— Junade Ali (@IcyApril) May 6, 2018

Case Insensitivity

One thing I realised after speaking to Stefán Jökull Sigurðarson from EVE Online was that different users were querying assets using different casing; for example, instead of range/A94A8, a request to range/a94a8 would return the same asset. As the Cache Key was case sensitive, the asset would be cached twice.

Unfortunately, the API was already public, with both forms of casing being acceptable, once I started these optimisations.

Enter Cloudflare Workers

Instead of adjusting the cache key to solve this problem, I decided to use Cloudflare Workers - allowing me to adjust cache behaviour using JavaScript.

Troy initially had a simple worker on the site to enable CORS:

addEventListener('fetch', event => {
  event.respondWith(checkAndDispatchReports(event.request))
})

async function checkAndDispatchReports(req) {
  if (req.method === 'OPTIONS') {
    let responseHeaders = setCorsHeaders(new Headers())
    return new Response('', {headers: responseHeaders})
  } else {
    return await fetch(req)
  }
}

function setCorsHeaders(headers) {
  headers.set('Access-Control-Allow-Origin', '*')
  headers.set('Access-Control-Allow-Methods', 'GET')
  headers.set('Access-Control-Allow-Headers', 'access-control-allow-headers')
  headers.set('Access-Control-Max-Age', 1728000)
  return headers
}

I extended this worker to ensure that when a request left Workers, the hash prefix would always be upper case. Additionally, I used the cacheKey flag to set the Cache Key directly in Workers when making the request (instead of using our internal Custom Cache Key configuration):

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request));
})

/**
 * Fetch request after making casing of hash prefix uniform
 * @param {Request} request
 */
async function handleRequest(request) {
  if (request.method === 'OPTIONS') {
    let responseHeaders = setCorsHeaders(new Headers())
    return new Response('', {headers: responseHeaders})
  }

  const url = new URL(request.url);
  if (!url.pathname.startsWith("/range/")) {
    const response = await fetch(request)
    return response;
  }

  const prefix = url.pathname.substr(7);
  const newRequest = "" + prefix.toUpperCase()

  if (prefix === prefix.toUpperCase()) {
    const response = await fetch(request, { cf: { cacheKey: newRequest } })
    return response;
  }

  const init = {
    method: request.method,
    headers: request.headers
  }
  const modifiedRequest = new Request(newRequest, init)
  const response = await fetch(modifiedRequest, { cf: { cacheKey: newRequest } })
  return response
}

function setCorsHeaders(headers) {
  headers.set('Access-Control-Allow-Origin', '*')
  headers.set('Access-Control-Allow-Methods', 'GET')
  headers.set('Access-Control-Allow-Headers', 'access-control-allow-headers')
  headers.set('Access-Control-Max-Age', 1728000)
  return headers
}

Incidentally, our Workers team is working on some really cool stuff around controlling our cache APIs at a fine-grained level; you'll be able to see some of that in due course by following this blog.


Finally, Argo plays an important part in improving cache hit ratio. Once toggled on, it is known for optimising the speed at which traffic travels around the internet - but it also means that when traffic is routed from one Cloudflare data center to another, if an asset is cached closer to the origin web server, the asset will be served from that data center. In essence, it offers Tiered Cache functionality: when traffic comes from a less-used Cloudflare data center, that data center can still utilise the cache of one receiving greater traffic (and more likely to have the asset in cache). This prevents an asset from having to travel all the way around the world whilst still being served from cache (even if not optimally close to the user).



By using Cloudflare's caching functionality, we are able to reduce the number of times a single asset is duplicated in cache due to accidental variations in the request parameters. Workers offers a mechanism to control the caching of assets on Cloudflare, with more fine-grained controls under active development.

By implementing this on Pwned Passwords, we are able to provide developers a simple and fast interface to reduce password reuse amongst their users, thereby limiting the effects of Credential Stuffing attacks on their systems. If only Irene Adler had used a password manager:

Interested in helping debug performance, cache and security issues for websites of all sizes? We're hiring for Support Engineers to join us in London, and additionally those speaking Japanese, Korean or Mandarin in our Singapore office.

Categories: Technology

Beta: EasyApache 4 updated

CloudLinux - Thu, 09/08/2018 - 14:07

New updated EasyApache packages are now available for download from our updates-testing repository.


ea-cpanel-tools 1.0-19.cloudlinux

  • EA-7549: if the PHP config file is a symlink and the force option is used, remove the symlink so the file can be written.

ea-documentroot 1.0-5.

  • updated footer logo to SVG;
  • updated index.html to set Cache-control to no-cache.

ea-nghttp2 1.32.0-1.

  • EA-7754: updated from 1.20.0 to 1.32.0.

ea-apache24-mod_suphp 0.7.2-25.cloudlinux

  • EA-7525: fixed 0008-Support-phprc_paths-section-in-suphp.conf.patch, targetMode used before initialized.

ea-php51-php-ioncube10 10.2.4-1.cloudlinux

  • EA-7753: updated from 10.2.2 to 10.2.4.

ea-php52-php-ioncube10 10.2.4-1.cloudlinux

  • EA-7753: updated from 10.2.2 to 10.2.4.

ea-php53-php-ioncube10 10.2.4-1.cloudlinux

  • EA-7753: updated from 10.2.2 to 10.2.4.

ea-php54-php-ioncube10 10.2.4-1.cloudlinux

  • EA-7753: updated from 10.2.2 to 10.2.4.

ea-php55-php-ioncube10 10.2.4-1.cloudlinux

  • EA-7753: updated from 10.2.2 to 10.2.4.

ea-php56-php-ioncube10 10.2.4-1.cloudlinux

  • EA-7753: updated from 10.2.2 to 10.2.4.

ea-php70-php-ioncube10 10.2.4-1.cloudlinux

  • EA-7753: updated from 10.2.2 to 10.2.4.

ea-php71-php-ioncube10 10.2.4-1.cloudlinux

  • EA-7753: updated from 10.2.2 to 10.2.4.

ea-php72-php-ioncube10 10.2.4-1.cloudlinux

  • EA-7753: updated from 10.2.2 to 10.2.4.

Update command:

yum groupupdate --enablerepo=cloudlinux-updates-testing ea-cpanel-tools ea-documentroot ea-nghttp2 ea-apache24-mod_suphp ea-php51-php-ioncube10 ea-php52-php-ioncube10 ea-php53-php-ioncube10 ea-php54-php-ioncube10 ea-php55-php-ioncube10 ea-php56-php-ioncube10 ea-php70-php-ioncube10 ea-php71-php-ioncube10 ea-php72-php-ioncube10
Categories: Technology

Beta: CloudLinux 7 and CloudLinux 6 Hybrid kernel updated

CloudLinux - Thu, 09/08/2018 - 11:44

CloudLinux 7 and CloudLinux 6 Hybrid kernel version 3.10.0-714.10.2.lve1.5.19.3 is now available for download from our updates-testing repository.


  • CLKRN-323: a flaw named SegmentSmack was found in the way the Linux kernel handled specially crafted TCP packets. A remote attacker could use this flaw to trigger time and calculation expensive calls to tcp_collapse_ofo_queue() and tcp_prune_ofo_queue() functions by sending specially modified packets within ongoing TCP sessions which could lead to a CPU saturation and hence a denial of service on the system. Maintaining the denial of service condition requires continuous two-way TCP sessions to a reachable open port, thus the attacks cannot be performed using spoofed IP addresses.

To install a new kernel, please use the following command:

CloudLinux 7:

yum install kernel-3.10.0-714.10.2.lve1.5.19.3.el7 --enablerepo=cloudlinux-updates-testing

CloudLinux 6 Hybrid:

yum install kernel-3.10.0-714.10.2.lve1.5.19.3.el6h --enablerepo=cloudlinux-hybrid-testing
Categories: Technology

PHP Configuration - Critical - Arbitrary PHP code execution - SA-CONTRIB-2018-055

Drupal Contrib Security - Wed, 08/08/2018 - 18:14
Project: PHP Configuration
Version: 8.x-1.0, 7.x-1.0
Date: 2018-August-08
Security risk: Critical 17∕25 AC:Basic/A:Admin/CI:All/II:All/E:Theoretical/TD:All
Vulnerability: Arbitrary PHP code execution
Description:

This module enables you to add or overwrite PHP configuration on a Drupal website.

The module doesn't sufficiently restrict who can set these configurations, leading to arbitrary PHP code execution by an attacker.

This vulnerability is mitigated by the fact that an attacker must have a role with the permission "administer phpconfig".

After updating the module, it's important to review the permissions of your website; if the 'administer phpconfig' permission is given to a not fully trusted user role, we advise revoking it.


Install the latest version:

Also see the PHP Configuration project page.

Reported By:
Fixed By:
Coordinated By:
  • mpotter of the Drupal Security Team

Categories: Technology

EasyApache 4 updated

CloudLinux - Tue, 07/08/2018 - 15:27

New updated EasyApache 4 packages are now available for download from our production repository.



  • ALTPHP-543: added support for disabling KeepListener mode to Litespeed SAPI.


  • ALTPHP-543: added support for disabling KeepListener mode to Litespeed SAPI.


  • ALTPHP-543: added support for disabling KeepListener mode to Litespeed SAPI.


  • ALTPHP-543: added support for disabling KeepListener mode to Litespeed SAPI;
  • EA-7526: ea-php*-php-fpm installs files it does not own.


  • ALTPHP-543: added support for disabling KeepListener mode to Litespeed SAPI;
  • EA-7526: ea-php*-php-fpm installs files it does not own.


  • ALTPHP-543: added support for disabling KeepListener mode to Litespeed SAPI;
  • EA-7526: ea-php*-php-fpm installs files it does not own;
  • EA-7733: updated PHP Source;
  • EA-7734: updated PHP Meta Package.


  • ALTPHP-543: added support for disabling KeepListener mode to Litespeed SAPI;
  • EA-7526: ea-php*-php-fpm installs files it does not own;
  • EA-7712: updated PHP Source;
  • EA-7713: updated PHP Meta Package.


  • ALTPHP-543: added support for disabling KeepListener mode to Litespeed SAPI
  • EA-7526: ea-php*-php-fpm installs files it does not own;
  • EA-7708: updated PHP Source;
  • EA-7709: updated PHP Meta Package.


  • ALTPHP-543: added support for disabling KeepListener mode to Litespeed SAPI;
  • EA-7526: ea-php*-php-fpm installs files it does not own;
  • EA-7704: updated PHP Source;
  • EA-7705: updated PHP Meta Package.

Update command:

yum update ea-php*
Categories: Technology

Use Cloudflare Stream to build secure, reliable video apps

CloudFlare - Tue, 07/08/2018 - 14:00

It’s our pleasure to announce the general availability of Cloudflare Stream. Cloudflare Stream is the best way for any founder or developer to deliver an extraordinary video experience to their viewers while cutting development time and costs, and as of today it is available to every Cloudflare customer.

If I had to summarize what we’ve learned as we’ve built Stream it would be: Video streaming is hard, but building a successful video streaming business is even harder. This is why our goal has been to take away the complexity of encoding, storage, and smooth delivery so you can focus on all the other critical parts of your business.

Cloudflare Stream API

You call a single endpoint, and Cloudflare Stream delivers a high-quality streaming experience to your visitors. Here’s how it works:

  1. Your app calls the /stream endpoint to upload a video. You can submit the contents of the video with the request or you can provide a URL to a video hosted elsewhere.
  2. Cloudflare Stream encodes the stream in multiple resolutions to enable multi-bitrate streaming. We also automatically prepare DASH and HLS manifest files.
  3. Cloudflare serves your video (in multiple resolutions) from our vast network of 150+ data centers around the world, as close as we can manage to every Internet-connected device on earth.
  4. We provide you an embed code for the video which loads the unbranded and customizable Cloudflare Stream Player.
  5. You place the embed code in your app, and you’re done.
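Step 1 above can be sketched as a request description. This is a minimal sketch only: the /stream path comes from the steps above, but the bearer-token auth scheme and the `url` body field are assumptions for illustration, not the documented request shape:

```javascript
// Build (but do not send) an upload request that points Stream at a video
// hosted elsewhere. Auth header and body field names are assumed.
function buildStreamUpload(apiToken, videoUrl) {
  return {
    method: "POST",
    path: "/stream",                         // endpoint named in the steps above
    headers: {
      "Authorization": `Bearer ${apiToken}`, // auth scheme assumed
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ url: videoUrl }), // field name assumed
  };
}

const req = buildStreamUpload("API_TOKEN", "https://example.com/video.mp4");
console.log(req.method); // "POST"
```

From there, steps 2–4 (encoding, manifest preparation, and delivery) happen on Cloudflare's side with no further work from your app.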
Why Stream

Cloudflare Stream is a simple product by design. We are happy to say we don’t provide every configuration option. Instead we make the best choices possible, both on a player and network level, to deliver a high-quality experience to your visitors.

Low Cost

Cloudflare Stream does not charge you for the intensive and complex job of encoding your video in different resolutions. You pay a dollar for every 1,000 minutes of streaming, and $5/mo for every 1,000 minutes of storage, and that’s it.

Behind the scenes we are driving costs so low by having both the most peered network in the world, and by intelligently serving your video from the data center which is fastest when the user has an empty buffer, and the most affordable when their buffer is full. This gives them the experience they need, while allowing you to serve video at a lower cost than you can find from platforms which can’t make these optimizations.

Efficient Routing

Cloudflare touches as much as one in every ten web requests made over the Internet. If you read this blog you know how much energy and effort we put into optimizing that system to deliver resources faster. When applied to video, it means faster time to first frame and reduced buffering for your viewers than providers who operate at a smaller scale.

Integrated Solution

The key innovation of Stream is looking at a video as not just a bunch of bytes to be served over the Internet, but as an experience for a user. Our encoding takes into account how files will be delivered from our data centers. Our player uses its knowledge of how we deliver to provide a better experience. All of this is only made possible through working with a partner who can see the entire user experience from developer to viewer.

Common Use Cases
  • Video-on-demand: Whether you have 50 hours or 50,000 hours of video content, you can use Cloudflare Stream to make it streamable to the world.
  • Gaming: Allow your users from around the world to upload and share videos of their gameplay.
  • eLearning: Cloudflare Stream makes it a breeze to build eLearning applications that offer multi-bitrate streaming and other important features such as offline viewing and advanced security tokens.
  • Video Ads: Use the Cloudflare Player to stream video ads with the confidence that your stream will be optimized for your audience.
  • Your Idea: We are here to help make the Internet better so you can build amazing things with it. Reach out with your ideas for how video can make your app, site, or service more powerful.
How to Get Started

To get started, simply sign up for Cloudflare and visit the Stream tab! As of today it is generally available for every Cloudflare user. If you’re an Enterprise customer, speak with your Cloudflare team.

Have a question or idea? Reach out in the community forum.

Categories: Technology

How to design an effective welcome email

Postmark - Tue, 07/08/2018 - 14:00

The welcome email is a key part of a customer’s first interaction with your product. They’ve just created an account and are working to get set up and verify your product meets their needs. It’s a great opportunity to provide customers with useful information that helps them explore your product.

We’ve written extensively about what makes a great welcome email in the past. In this post, we’re going to look at some examples from popular products to inspire you to create effective welcome emails of your own.

Slack Slack Welcome Email Slack Welcome Email

Slack’s welcome email does a great job of providing you with an overview of all your account details. Each team in Slack has a subdomain which is used to access the product, so including the URL here is a great move. The email also has a reminder of which plan you are on, with a quick summary of what that plan includes.

There’s a clear callout for Slack’s guides to help new customers get the most out of the product.

The main pitfall of this email is there’s no information on how to contact support if you have a question or need some help getting set up. Even the Reply-To address is a no-reply email address. Simply updating this to a support address would give users a quick way of getting help if they need it.

Harvest Harvest Welcome Email Harvest Welcome Email

Harvest’s welcome email is a great example of how to provide a wealth of useful information in a short and concise format. This email includes all your important account information, links to helpful resources, and a clear call to action that links the customer back into the onboarding flow.

Replies go straight to the Harvest support team, making it easy for customers to quickly ask any questions they have. 

Basecamp Basecamp 3 Welcome Email Basecamp 3 Welcome Email

Basecamp’s welcome email is also simple and to-the-point. I love how the opening paragraph re-iterates Basecamp’s mission in a way that’s relatable to the customer. As with the other examples we’ve looked at so far, there’s a link to your account and some info on how to get help should you need it. Note that replies also go to the support team.

One thing missing here is a clear next step action. Basecamp has a great collection of getting started videos on their Learn page which would be a good resource to highlight in the welcome email.

FreeAgent

FreeAgent sends out two different emails during the sign-up process. The first includes a summary of your account details and a prompt to confirm your email address and activate your account.

It’s pretty standard to see activation links in welcome emails. Sometimes it’s necessary to help limit fraudulent signups, but there are other ways of validating that emails are correct. For example, many email providers have webhooks that can notify your application if an email bounces, which is a great way of doing passive email confirmation. If your app receives a bounce notification, you can mark the email as broken in your database and show the customer a prompt to update their email address next time they access your app.
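To make the passive-confirmation idea concrete, here is a minimal sketch of a bounce-webhook handler. The payload shape and field names (`event`, `type`, `email`) are assumptions for illustration only; every email provider defines its own webhook format, so check your provider's docs.

```python
# Sketch: passive email confirmation via a bounce webhook.
# The payload shape ({"event": ..., "type": ..., "email": ...}) is
# hypothetical; real providers each define their own format.

def handle_bounce_event(event: dict, users: dict) -> bool:
    """Flag an address as broken on a hard bounce.

    `users` stands in for your user store: {email: {"email_ok": bool}}.
    Returns True if a record was flagged.
    """
    if event.get("event") != "bounce" or event.get("type") != "hard":
        return False  # ignore soft bounces and unrelated events
    user = users.get(event.get("email"))
    if user is None:
        return False
    user["email_ok"] = False  # prompt the customer to fix it on next login
    return True
```

On their next visit, any user whose address is flagged would see a prompt to update their email, with no activation link needed up front.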

FreeAgent Welcome Email

FreeAgent’s main welcome email contains a clear next step action to view their collection of getting started videos. This email also includes more detailed information on how to get support.

You should generally avoid sending more than one welcome email. Sending a bunch of different emails in a short amount of time can overwhelm customers, which is a surefire way to lower your engagement rates.

That said, in FreeAgent’s case sending two emails is okay. Each email has an obvious call-to-action, and combining all the content into a single email may become overwhelming in itself. 

Headspace

Headspace’s welcome email starts with a reminder of the benefits customers can expect from using the product. It includes beautiful visuals that set a calm and approachable tone. A clear call-to-action prompts customers to get back into the app and try the product.

The email also includes a quick overview of the journey a customer can expect to take with Headspace, starting with some basic techniques and then moving on to broader topics. This helps set expectations and eases any anxiety the customer might have about trying something new.

There are also pointers on where to get help, and responses to the Reply-To address go directly to the support team.


In this post, we’ve looked at examples of welcome emails from five different products. Here are some of the common themes we observed:

  • Friendly copy that reiterates the benefits of using the product
  • Clear calls to action that prompt customers to engage with the product
  • Important account details like login URLs, email addresses, and subscription plans
  • Resources to help customers get started with the product
  • Reply-To addresses that go to customer support
  • Links and phone numbers to contact support

Learn more about designing great welcome emails

Now that you’re feeling inspired, here are some more resources to help you build great welcome emails for your applications.

Categories: Technology

Additional Record Types Available with Cloudflare DNS

CloudFlare - Mon, 06/08/2018 - 17:45

Cloudflare recently updated its authoritative DNS service to support nine new record types. Since these records are less commonly used than those we previously supported, we thought it would be a good idea to briefly explain each record type and how it is used.


DNSKEY and DS

DNSKEY and DS work together to allow you to enable DNSSEC on a child zone (subdomain) that you have delegated to another nameserver. DS is useful if you are delegating DNS (through an NS record) for a child zone to a separate system and want to keep using DNSSEC for that zone; without a DS entry in the parent, the child's data will not be validated. We've blogged about the details of Cloudflare's DNSSEC implementation and why it is important in the past, and this new feature allows more flexible adoption for customers who need to delegate subdomains.

Certificate Related Record Types

Today, there is no way to restrict which TLS (SSL) certificates are trusted to be served for a host. For example, if an attacker were able to maliciously generate an SSL certificate for a host, they could use it in a man-in-the-middle attack to impersonate the original site. With SSHFP, TLSA, SMIMEA, and CERT records, a website owner can publish the exact certificate or public key that is allowed to be used on the domain, stored inside DNS and secured with DNSSEC, reducing the risk of these kinds of attacks succeeding.

It is critically important that, if you rely on these record types, you enable and configure DNSSEC for your domain.


SSHFP

This record type answers the question, "When I'm connecting via SSH to this remote machine, it's authenticating me, but how do I authenticate it?" If you're the only person connecting to the machine, your SSH client compares the fingerprint of the public host key to the one it stored in the known_hosts file during the first connection. However, across multiple machines or multiple users in an organization, you need to verify this information against a common source of trust. In essence, you need the equivalent of the authentication a certificate authority provides by signing an HTTPS certificate, but for SSH. Although it's possible to set up certificate authorities for SSH and have them sign public host keys, another way is to publish the fingerprints of the keys in the domain via the SSHFP record type.

Again, for these fingerprints to be trustworthy it is important to enable DNSSEC on your zone.

The SSHFP record type is similar to the TLSA record: for a given hostname, you specify the algorithm type, the fingerprint type, and then the fingerprint itself.
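As a sketch of how those fields fit together: the fingerprint in an SSHFP record is a hash of the raw host-key blob (the base64 portion of the public host key). The helper below is illustrative; the hostname and record values in the comment are made up.

```python
import base64
import hashlib

def sshfp_fingerprint(host_key_b64: str, fp_type: int = 2) -> str:
    """Hash a base64-encoded SSH host-key blob for an SSHFP record.

    fp_type matches the record's second field: 1 = SHA-1, 2 = SHA-256.
    """
    blob = base64.b64decode(host_key_b64)
    algo = hashlib.sha256 if fp_type == 2 else hashlib.sha1
    return algo(blob).hexdigest()

# A resulting record would look like (hostname and values hypothetical):
#   host.example.com. IN SSHFP 4 2 <sha256-fingerprint-hex>
# where 4 = Ed25519 and 2 = SHA-256.
```

In practice you rarely need to compute this yourself: `ssh-keygen -r <hostname>` prints ready-made SSHFP records for a host's keys.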

If the domain and remote server have SSHFP set and you are running an SSH client (such as OpenSSH 5.1+) that supports it, you can now verify the remote machine upon connection by adding the following parameters to your connection:

❯ ssh -o "VerifyHostKeyDNS=yes" -o "StrictHostKeyChecking=yes" [insertremoteserverhere]


TLSA

TLSA records were designed to specify which keys are allowed to be used for a given domain when connecting via TLS. They were introduced in the DANE specification and allow domain owners to announce which certificate can and should be used for specific purposes on the domain. While most major browsers do not support TLSA, it may still be valuable for applications and services outside the browser.

For example, I’ve set a TLSA record for the domain for TCP traffic over port 443. There are a number of ways to generate the record, but the easiest is likely through Shumon Huque’s tool.
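As an illustration of what a "3 1 1" record asserts, the check a DANE-aware client performs boils down to a hash comparison. This sketch assumes you already hold the DER-encoded SubjectPublicKeyInfo from the server's certificate; extracting it is left to a TLS library.

```python
import hashlib

def tlsa_matches(spki_der: bytes, rdata_hex: str) -> bool:
    """Check a certificate's public key against TLSA rdata for the common
    "3 1 1" case: usage 3 (DANE-EE), selector 1 (SubjectPublicKeyInfo),
    matching type 1 (SHA-256).
    """
    return hashlib.sha256(spki_der).hexdigest() == rdata_hex.lower()
```

A client would fetch the TLSA RRset (validating it with DNSSEC), then compare the key presented in the TLS handshake against each record with a check like this.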

For most of the examples in this post we will use kdig rather than the ubiquitous dig. Preinstalled versions of dig are often old and may not handle newer record types well. If your queries do not quite match up, either upgrade your version of dig or install knot (which provides kdig).

;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 2218
;; Flags: qr rd ra ad; QUERY: 1; ANSWER: 2; AUTHORITY: 0; ADDITIONAL: 1

;; QUESTION SECTION:
;; IN TLSA

;; ANSWER SECTION:
300 IN TLSA 3 1 1 4E48ED671DFCDF6CBF55E52DBC8B9C9CC21121BD149BC24849D1398DA56FB242
300 IN RRSIG TLSA 13 4 300 20180803233834 20180801213834 35273 JvC9mZLfuAyEHZUZdq4n8kyRbF09vwgx4c1fas24Ag925LILr1armjHbr7ZTp8ycS/Go3y3lgyYCuBeW/vT/3w==

;; Received 232 B
;; Time 2018-08-02 15:38:34 PDT
;; From in 28.5 ms

From the request and response above, we can see that the response for the zone is secured and signed with DNSSEC (the ad flag) and that I should verify a certificate whose public key's SHA-256 hash (per the "3 1 1" parameters) is 4E48ED671DFCDF6CBF55E52DBC8B9C9CC21121BD149BC24849D1398DA56FB242. We can use openssl (v1.1.x or higher) to verify the results:

❯ openssl s_client -connect -dane_tlsa_domain "" -dane_tlsa_rrdata " 3 1 1 4e48ed671dfcdf6cbf55e52dbc8b9c9cc21121bd149bc24849d1398da56fb242"
CONNECTED(00000003)
depth=0 C = US, ST = CA, L = San Francisco, O = "CloudFlare, Inc.", CN =
verify return:1
---
Certificate chain
 0 s:/C=US/ST=CA/L=San Francisco/O=CloudFlare, Inc./
   i:/C=US/ST=CA/L=San Francisco/O=CloudFlare, Inc./CN=CloudFlare Inc ECC CA-2
 1 s:/C=US/ST=CA/L=San Francisco/O=CloudFlare, Inc./CN=CloudFlare Inc ECC CA-2
   i:/C=IE/O=Baltimore/OU=CyberTrust/CN=Baltimore CyberTrust Root
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIE7jCCBJSgAwIBAgIQB9z9WxnovNf/lt2Lkrfq+DAKBggqhkjOPQQDAjBvMQsw
...
---
SSL handshake has read 2666 bytes and written 295 bytes
Verification: OK
Verified peername:
DANE TLSA 3 1 1 ...149bc24849d1398da56fb242 matched EE certificate at depth 0

SMIMEA

SMIMEA records function similarly to TLSA records but are specific to email addresses. The domain for these records must be prefixed with “_smimecert.”, and specific formatting is required to attach an SMIMEA record to an email address: the local-part (username) of the address must be put into a specific form and SHA-256 hashed, as detailed in the RFC. From the RFC example: “For example, to request an SMIMEA resource record for a user whose email address is "", an SMIMEA query would be placed for the following QNAME:
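The RFC's example address and QNAME did not survive above, so here is a separate, hypothetical illustration of the construction as I read the RFC: hash the local-part with SHA2-256, truncate the digest to 28 octets, hex-encode it, then add the `_smimecert` label and the domain. The address used below is made up.

```python
import hashlib

def smimea_qname(email: str) -> str:
    """Build the SMIMEA query name for an email address.

    SHA2-256 the local-part, truncate the digest to 28 octets,
    hex-encode, then append the _smimecert label and the domain.
    (Sketch: real local-parts may need normalization first.)
    """
    local, _, domain = email.partition("@")
    digest = hashlib.sha256(local.encode("utf-8")).digest()[:28]
    return f"{digest.hex()}._smimecert.{domain}"

# Hypothetical example address:
print(smimea_qname("someone@example.com"))
```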


CERT

CERT records are used for generically storing certificates within DNS and are most commonly used by systems for email encryption. To create a CERT record, you specify the certificate type, the key tag, the algorithm, and then the certificate data, which is either the certificate itself, a CRL, a URL of the certificate, or a fingerprint and a URL.

Other Newly Supported Record Types

PTR

PTR (Pointer) records are pointers to canonical names. They are similar to CNAME records in structure, meaning they contain a single FQDN (fully qualified domain name), but the RFC dictates that subsequent lookups are not performed for PTR records; the value is simply returned to the requestor. This differs from a CNAME, where a recursive resolver would follow the target of the canonical name. The most common use of PTR records is in reverse DNS, where you can look up which domain is meant to exist at a given IP address. These records are useful for outbound mail servers as well as authoritative DNS servers.

It is only possible to delegate the authority for IP addresses that you own from your Regional Internet Registry (RIR). Creating reverse zones and PTR records for IPs that you cannot (or do not) delegate does not serve any practical purpose.

For example, looking up the A record for gives us the IP of

❯ kdig a +short

Now imagine we want to know whether the owner of this IP ‘authorizes’ to point to it. Reverse zones are specially crafted child zones within (for IPv4) and (for IPv6), which are delegated via the Regional Internet Registries to the owners of the IP address space. That is to say, if you own a /24 from ARIN, ARIN will delegate the reverse zone space for that /24 to you to control. The IPv4 address is represented inverted as the subdomain in Since Cloudflare owns the IP, we’ve delegated the reverse zone and created a PTR record there.

❯ kdig -x
;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 18658
;; Flags: qr rd ra; QUERY: 1; ANSWER: 1; AUTHORITY: 0; ADDITIONAL: 0

;; QUESTION SECTION:
;; IN PTR

;; ANSWER SECTION:
1222 IN PTR
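The inverted-name construction described above can be reproduced with Python's standard library: `ipaddress` builds the reverse-zone name for any address. The address below is from the RFC 5737 documentation range, used as a stand-in rather than any IP discussed here.

```python
import ipaddress

# IPv4: octets are reversed under the IPv4 reverse-DNS suffix.
addr = ipaddress.ip_address("192.0.2.53")
print(addr.reverse_pointer)  # -> 53.2.0.192.in-addr.arpa

# IPv6 reverse names expand and reverse every nibble of the address.
addr6 = ipaddress.ip_address("2001:db8::1")
print(addr6.reverse_pointer)
```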

For completeness, here is the +trace for the zone. We can see that the /24 has been delegated to Cloudflare from ARIN:

❯ dig +trace
; <<>> DiG 9.8.3-P1 <<>> +trace
;; global options: +cmd
.			48419	IN	NS
.			48419	IN	NS
.			48419	IN	NS
.			48419	IN	NS
.			48419	IN	NS
.			48419	IN	NS
.			48419	IN	NS
.			48419	IN	NS
.			48419	IN	NS
.			48419	IN	NS
.			48419	IN	NS
.			48419	IN	NS
.			48419	IN	NS
;; Received 228 bytes from 2001:4860:4860::8888#53(2001:4860:4860::8888) in 25 ms

			172800	IN	NS
			172800	IN	NS
			172800	IN	NS
			172800	IN	NS
			172800	IN	NS
			172800	IN	NS
;; Received 421 bytes from in 8 ms

			86400	IN	NS
			86400	IN	NS
			86400	IN	NS
			86400	IN	NS
			86400	IN	NS
			86400	IN	NS
;; Received 165 bytes from in 300 ms

			86400	IN	NS
			86400	IN	NS
;; Received 95 bytes from 2001:500:13::63#53(2001:500:13::63) in 188 ms

NAPTR

Naming Authority Pointer (NAPTR) records are used in conjunction with SRV records, generally as part of the SIP protocol. NAPTR records map domains to the specific services available for that domain. Anders Brownworth has an excellent detailed description on his blog. The start of his example, with his permission:

Let’s consider a call to Given only this address though, we don't know what IP address, port or protocol to send this call to. We don't even know if supports SIP or some other VoIP protocol like H.323 or IAX2. I'm implying that we're interested in placing a call to this URL but if no VoIP service is supported, we could just as easily fall back to emailing this user instead. To find out, we start with a NAPTR record lookup for the domain we were given:

# host -t NAPTR
NAPTR 10 100 "S" "SIP+D2U" ""
NAPTR 20 100 "S" "SIP+D2T" ""
NAPTR 30 100 "S" "E2U+email" "!^.*$!!i"

Here we find that gives us three ways to contact, the first of which is "SIP+D2U" which would imply SIP over UDP at
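The client-side selection the example implies can be sketched as a sort: records are tried in ascending order, then ascending preference within an order. The field values below mirror the lookup above, but the replacement targets are made up for illustration.

```python
from typing import NamedTuple

class Naptr(NamedTuple):
    order: int
    preference: int
    flags: str
    service: str
    replacement: str

# NAPTR answers as a client might receive them (targets hypothetical).
answers = [
    Naptr(20, 100, "S", "SIP+D2T", "_sip._tcp.example.com."),
    Naptr(10, 100, "S", "SIP+D2U", "_sip._udp.example.com."),
    Naptr(30, 100, "S", "E2U+email", "mailto-regexp"),
]

# Try services the way a client would: lowest order first, then preference.
for rec in sorted(answers, key=lambda r: (r.order, r.preference)):
    print(rec.service, "->", rec.replacement)
```

With these values, SIP over UDP is attempted first, then SIP over TCP, and email is the final fallback.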


URI

Uniform Resource Identifier (URI) records are commonly used as a complement to NAPTR records and, per the RFC, can be used to replace SRV records. As such, they contain Weight and Priority fields as well as a Target, similar to SRV.

One use case, proposed by a draft RFC, is to replace SRV records with URI records for discovering Kerberos key distribution centers (KDCs). This minimizes the number of requests compared with SRV records and allows the domain owner to specify a preference for TCP or UDP.

In the example below, the records specify that we should use a KDC over TCP at the default port, falling back to UDP on port 89 should the primary connection fail.

❯ kdig URI
;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 8450
;; Flags: qr rd ra; QUERY: 1; ANSWER: 2; AUTHORITY: 0; ADDITIONAL: 0

;; QUESTION SECTION:
;; IN URI

;; ANSWER SECTION:
283 IN URI 1 10 ""
283 IN URI 1 20 ""

Summary

Cloudflare now supports CERT, DNSKEY, DS, NAPTR, PTR, SMIMEA, SSHFP, TLSA, and URI records in our authoritative DNS products. We would love to hear about any interesting use cases you have for the new record types, and which other record types we should support in the future.

Our DNS engineering teams in London and San Francisco are both hiring if you would like to contribute to the fastest authoritative and recursive DNS services in the world.


Categories: Technology

Beta: CloudLinux 7 and CloudLinux 6 Hybrid kernel updated

CloudLinux - Mon, 06/08/2018 - 16:17

CloudLinux 7 and CloudLinux 6 Hybrid kernel version 3.10.0-793.21.1.lve1.5.20 is now available for download from our updates-testing repository.

Main features:

  • KASLR support.

    Kernel address space layout randomization (KASLR), which was previously available as a Technology Preview, is fully supported in Red Hat Enterprise Linux 7.5 on the AMD64 and Intel 64 architectures. KASLR is a kernel feature consisting of two parts, kernel text KASLR and mm KASLR, which work together to enhance the security of the Linux kernel.

  • Retpoline support.

    A retpoline is designed to protect against the branch target injection (CVE-2017-5715) exploit, an attack in which an indirect branch instruction in the kernel is used to force the speculative execution of an arbitrary chunk of code. The chosen code is a "gadget" that is somehow useful to the attacker; for example, code can be chosen so that it leaks kernel data through its effect on the cache. The retpoline prevents this exploit by replacing all indirect branch instructions with a return-based trampoline sequence.

  • CLKRN-290: fixed CVE-2018-3665.

    The security flaw takes advantage of one of the ways the Linux kernel saves and restores the state of the Floating Point Unit (FPU) when switching tasks – specifically the Lazy FPU Restore scheme. Malware or malicious users can take advantage of the vulnerability to grab encryption keys.


  • CLKRN-272: added a workaround to avoid a crash with 32-bit binaries;
  • CLKRN-314: fixed boot on Xen in PV mode;
  • CLKRN-319: fixed an assertion on XFS partition file removal;
  • CLKRN-320: fixed CVE-2017-18344.

To install a new kernel, please use the following command:

CloudLinux 7:

yum install kernel-3.10.0-793.21.1.lve1.5.20.el7 --enablerepo=cloudlinux-updates-testing

CloudLinux 6 Hybrid:

yum install kernel-3.10.0-793.21.1.lve1.5.20.el6h --enablerepo=cloudlinux-hybrid-testing
Categories: Technology

