Blogroll Category: Technology

I read blogs, as well as write one. The 'blogroll' on this site reproduces some posts from some of the people I enjoy reading. There are currently 164 posts from the category 'Technology.'

Disclaimer: Reproducing an article here does not necessarily imply agreement or endorsement!

Creating a single pane of glass for your multi-cloud Kubernetes workloads with Cloudflare

CloudFlare - Fri, 23/02/2018 - 17:00

(This is a crosspost of a blog post originally published on Google Cloud blog)

One of the great things about container technology is that it delivers the same experience and functionality across different platforms. This frees you as a developer from having to rewrite or update your application to deploy it on a new cloud provider—or lets you run it across multiple cloud providers. With a containerized application running on multiple clouds, you can avoid lock-in, run your application on the cloud for which it’s best suited, and lower your overall costs.

If you’re using Kubernetes, you probably manage traffic to clusters and services across multiple nodes using internal load-balancing services, which is the most common and practical approach. But if you’re running an application on multiple clouds, it can be hard to distribute traffic intelligently among them. In this blog post, we show you how to use Cloudflare Load Balancer in conjunction with Kubernetes so you can start to achieve the benefits of a multi-cloud configuration.

To continue reading, follow the Google Cloud blog here; or, if you are ready to get started, we have created a guide on how to deploy an application using Kubernetes on GCP and AWS along with our Cloudflare Load Balancer.


Categories: Technology

Kathmandu, Nepal is data center 123

CloudFlare - Fri, 23/02/2018 - 01:13

We said that we would head to the mountains for Cloudflare’s 123rd data center, and mountains feature prominently as we talk about Kathmandu, Nepal, home of our newest deployment and our 42nd data center in Asia!

Five and three quarter key facts to get started:

  • Nepal is home to the highest mountain in the world.
  • Kathmandu has more UNESCO heritage sites in its immediate area than any other capital!
  • The Nepalese flag isn’t a rectangle. It’s not even close!
  • Nepal has never been conquered or ruled by another country.
  • Kathmandu, Nepal is where Cloudflare has placed its 123rd data center.
  • Nepal’s timezone is 5 hours 45 minutes ahead of GMT.
Mountains

The mountainous nation of Nepal is home to Mount Everest, the highest mountain in the world, known in Nepali as Sagarmāthā. Most of us learn that at school; however, there are plenty of other mountains located in Nepal. Here are the ones above 8,000 meters (extracted from the full list) to get you started:

  • Mount Everest at 8,848 meters
  • Kanchenjunga at 8,586 meters
  • Lhotse at 8,516 meters
  • Makalu at 8,463 meters
  • Cho Oyu at 8,201 meters
  • Dhaulagiri I at 8,167 meters
  • Manaslu at 8,156 meters
  • Annapurna I at 8,091 meters

Photo of Annapurna taken outside Pokhara by the blog author

As we said, Nepal is a very mountainous country! Some of these mountains are shared with neighboring countries. In fact, the whole Himalayan range stretches much further than just Nepal, encompassing neighboring countries such as Bhutan, China, India, and Pakistan.

Nepal’s flag

The official flag of Nepal is not a rectangle; it has a truly unique shape. No other national flag comes close. At first viewing it looks simple: just two triangles (representing the Himalayas) stacked against the flagpole. Nope - the triangles have very specific dimensions.


The flag's symbolism goes beyond the mountains. The two white symbols represent the calming moon and the fierce sun - or, in another reading, the cool weather of the Himalayas and the warm lowlands.

But back to those two triangles. Ignoring the old adage ("It was my understanding that there would be no math."), let’s grab what Wikipedia says about the shape of this flag and see if we can follow along.

First off, let's explain irrational vs rational numbers (or ratios or fractions). A rational number is a simple P/Q number like 1/2, 3/4, 1/5, or even 16/9. The numerator and denominator must both be integers, and the denominator must not be 0 (P/0 is not a number at all). Even 100003/999983 (a ratio of two primes) is a rational number.

An irrational number is everything else: if it can't be written as P/Q, it's irrational. Its decimal expansion neither terminates nor becomes periodic. For example, π or Pi (3.141592653589793238462…), e or Euler's number (2.718281828459045235360…), and the square root of 2 (1.414213562373095…) are all irrational numbers. Don't be fooled by 4/3: while it's impossible to write out in full (1.33333… continuing forever), it's actually a rational number, because its expansion is periodic.
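A small Python sketch makes the distinction concrete (the specific approximation 141421356/100000000 is just an illustrative choice):

```python
from fractions import Fraction

# 4/3 never terminates in decimal, but it is exactly a ratio of two
# integers, so it is rational - Fraction arithmetic is exact:
four_thirds = Fraction(4, 3)
assert four_thirds * 3 == 4  # no rounding error

# The square root of 2 is irrational: no P/Q squares to exactly 2.
# Even a close rational approximation misses:
approx = Fraction(141421356, 100000000)  # 1.41421356
assert approx * approx != 2
print(float(approx * approx))  # close to 2, but never equal
```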

(Read up about Hippasus, who’s credited with discovering the existence of irrational numbers, if you want an irrational murder story!)

That’s enough math theory; now back to the Nepali flag. Each red triangle has a rational ratio. The flag starts with simple 1:1 and 3:4 ratios, and that’s the easy part. We are all capable of grabbing paper or cloth and making a rectangle that’s 3 inches by 4 inches, or 1.5 meters by 2 meters, or any other 3:4 ratio. It’s simple math. What gets complicated is adding the blue border. For that, we need to read up on what the On-Line Encyclopedia of Integer Sequences (OEIS Foundation) and Wikipedia say. They both go into great depth to describe the full mathematical dimensions of the flag. Let’s just paraphrase them slightly:

[Image: the OEIS/Wikipedia description of the flag's construction]

The math (and geometry) award, however, goes to the work behind "Calculation of the aspect ratio of the national flag of Nepal". The final geometric drawing is this:

[Image: final geometric drawing from "Berechnung des Seitenverhältnisses der Nationalfahne von Nepal" ("Calculation of the aspect ratio of the national flag of Nepal")]

Yeah - that’s going a bit too far for a Cloudflare blog! Let’s just say that Nepal’s flag is unique and quite interesting.

APRICOT 2018

We are especially excited to announce our Kathmandu data center while attending the APRICOT conference, held in Nepal this year. The event, supported by APNIC, the Regional Internet Registry (RIR) for the Asia-Pacific region, attracts leaders from the Internet industry's technical, operational, and policy-making communities. Cloudflare's favorite part of APRICOT is the Peering Forum track on Monday.

SAARC

Nepal is just one of the eight countries making up SAARC (the South Asian Association for Regional Cooperation). Headquartered in Kathmandu, the organization comprises Afghanistan, Bangladesh, Bhutan, India, the Maldives, Nepal, Pakistan, and Sri Lanka.

Cloudflare has already deployed into neighboring India, and any astute reader of these blogs will know we are always working on adding sites where and when we can. The SAARC countries are in our focus.

Build, build, build!

For Cloudflare's next two data centers, we head to a different continent, way south of the equator. Throughout 2018, we'll announce a stream of deployments across many different cities, each improving the security and performance of millions of our customers.

If you’re motivated by the idea of helping build one of the world's largest networks, come join our team!

Categories: Technology

Drupal 8.5.0-rc1 is available for testing

Drupal - Thu, 22/02/2018 - 18:28

The first release candidate for the upcoming Drupal 8.5.0 release is now available for testing. Drupal 8.5.0 is expected to be released March 7.

Download Drupal-8.5.0-rc1

8.5.x makes the Media module available for all, improves migrations significantly, stabilizes the Content Moderation and Settings Tray modules, serves dynamic pages faster with BigPipe enabled by default, and introduces the new experimental Layout Builder module. The release includes several very important fixes for workflows of content translations and supports PHP 7.2. Finally, 8.5.0-rc1 also includes the same security updates that are provided in 8.4.5.

What does this mean to me?

For Drupal 8 site owners

Drupal 8.4.5, a security update and the final release of the 8.4.x series, has also been released this week. 8.4.x sites should update immediately to 8.4.5, but going forward, 8.4.x will receive no further releases following 8.5.0's release date, and sites should prepare to update from 8.4.x to 8.5.x in order to continue getting bug and security fixes. Use update.php to update your 8.4.x sites to the 8.5.x series, just as you would to update from (e.g.) 8.4.2 to 8.4.3. You can use this release candidate to test the update. (Always back up your data before updating sites, and do not test updates in production.)

If you're an early tester who is already running 8.5.0-alpha1 or 8.5.0-beta1, you should update to 8.5.0-rc1 immediately. 8.5.0-rc1 includes security fixes (the same fixes that were released in Drupal 8.4.5).

Site owners should also take note that Drupal 8's support for PHP 5 will end in one year, in March 2019. PHP 7.2 is now the recommended PHP version to use with Drupal 8.

For module and theme authors

Drupal 8.5.x is backwards-compatible with 8.4.x. However, it does include internal API changes and API changes to experimental modules, so some minor updates may be required. Review the change records for 8.5.x, and test modules and themes with the release candidate now.

For translators

Some text changes were made since Drupal 8.4.0. Localize.drupal.org automatically offers these new and modified strings for translation. Strings are frozen with the release candidate, so translators can now update translations.

For core developers

All outstanding issues filed against 8.4.x were automatically migrated to 8.5.x. Future bug reports should be targeted against the 8.5.x branch. 8.6.x will remain open for new development during the 8.5.x release candidate phase. The 8.5.x branch will be subject to release candidate restrictions, with only critical fixes and certain other limited changes allowed.

Your bug reports help make Drupal better!

Release candidates are a chance to identify bugs for the upcoming release, so help us by searching the issue queue for any bugs you find, and filing a new issue if your bug has not been reported yet.

Categories: Technology

Beta: Alt-PHP updated

CloudLinux - Thu, 22/02/2018 - 17:35

New updated Alt-PHP packages are now available for download from our updates-testing repository.

Changelog:

alt-php52-5.2.17-116

  • ALTPHP-450: fix for Bug #71335 Type Confusion in WDDX Packet Deserialization.

alt-php53-5.3.29-73

  • ALTPHP-450: fix for Bug #71335 Type Confusion in WDDX Packet Deserialization;
  • ALTPHP-451: fix for CVE-2016-7478 Unserialize Exception object can lead to infinite loop.

alt-php54-5.4.45-53

  • ALTPHP-450: Fix for Bug #71335 Type Confusion in WDDX Packet Deserialization.

Install command:

yum groupinstall alt-php --enablerepo=cloudlinux-updates-testing
Categories: Technology

Reseller Limits Beta is now available for Plesk!

CloudLinux - Thu, 22/02/2018 - 16:52

You asked — we delivered! LVE Manager version 3.1-3 with Reseller Limits support for Plesk has been released to beta! The CloudLinux OS team has been working on this feature for over twenty months, and we are happy it is now available to more customers!

All the Reseller Limits features and improvements previously introduced for cPanel users are now available in Plesk too. Plesk hosters can now set limits on the resources each reseller can use, and give resellers control over the resources available to each of their end users.


We encourage you to help us with beta testing and to install this feature on your servers. We’d be extremely grateful for any feedback provided!

Please note that to use Reseller Limits you will have to install a kernel that supports the feature. The latest stable kernel with Reseller Limits support is 3.10.0-714.10.2.lve1.5.12 (we also recommend trying our newest beta kernel, which includes Meltdown/Spectre fixes for Xen PV). If you use a CloudLinux 6 kernel, you will first have to migrate to a Hybrid kernel, since the feature is only supported in CloudLinux 7 and CloudLinux 6 Hybrid kernels. Follow the instructions at https://docs.cloudlinux.com/index.html?hybrid_kernel.html to do so.

To find out more on how to operate Reseller Limits, please read this documentation article. We also recommend watching the CloudLinux Academy webinar where our CEO and Founder Igor Seletskiy reviews the Reseller Limits functionality.

If you missed the previous beta releases of this feature for cPanel and its advantages, please follow the links for information on its production release.

To install the new beta from our updates-testing repository, please use the following command:

yum update lvemanager lve-stats lve-utils alt-python27-cllib --enablerepo=cloudlinux-updates-testing

Please find a changelog below.

lvemanager-3.1-3

  • WEB-879: fixed title of reseller options for faults notify;
  • WEB-809: adapted the "Users" tab of reseller plugin in Plesk;
  • WEB-869: fixed an error while trying to open statistic frame for reseller with no limits on Plesk;
  • WEB-874: fixed report text in Resource Usage for CPU usage (DirectAdmin, user's plugin);
  • WEB-858: fixed an issue when sorting is not working;
  • WEB-884: fixed an error entry into LVE Manager plugin via direct link in Plesk's sidebar (wrong checking of csrftoken).

lve-utils-2.1-3

  • LU-460: NameMap is now used instead of ClPwd to get reseller id;
  • LVES-818: implemented reseller limits to lvectl for Plesk;
  • LU-572: missing user without package in 'getcontrolpaneluserspackages' commands (Plesk).

alt-python27-cllib-1.3-4

  • PTCLLIB-105: main domain is now first in 'userdomains' result (Plesk);
  • LVES-770: fixed an issue where cloudlinux-top doesn't work on Plesk with resellers;
  • LU-448: implemented resellers method on Plesk.

lve-stats-2.8-2

  • LVES-878: fix for reseller historical usage request on Plesk;
  • LVES-846: fixed reseller's name field in cl-top and cl-stats (Plesk);
  • LVES-861: hoster's period is now used to notify reseller about user's faults.
Categories: Technology

#PressForProgress - International Women’s Day 2018 | A Cloudflare & Branch Event

CloudFlare - Thu, 22/02/2018 - 15:46
#PressForProgress - International Women’s Day 2018 | A Cloudflare & Branch Event

Almost a year ago, I began my journey in the tech industry at a growing company called Cloudflare. I’m a 30-something paralegal and although I didn’t know how to write code (yet), I was highly motivated and ready to crush. I had worked hard for the previous two years, focused on joining a thriving company where I could grow my intelligence, further develop my skill set and work alongside successful professionals. And finally, my hard work paid off; I landed the job at Cloudflare and booked a seat on the rocket ship.

After the initial whirlwind that accompanies this fast-paced field subsided, motivation, inspiration, success, momentum and endurance began to flood my neurons. I loved the inner workings of a successful startup, felt the good and bad of the tech industry, related to and admired the female executives and most importantly, wanted to give something back to the community that adopted me.

#PressForProgress - International Women’s Day 2018 | A Cloudflare & Branch Event
Venus Approaching the Sun Source: Flickr

During a routine chat with my dad, I pitched what I thought was a crazy idea. Crazy because I was so used to being told “no” at previous jobs, used to not having my ideas taken seriously, and also used to not being given opportunities in my career. My idea was simple: “Wouldn’t it be great to have an International Women’s Day event at Cloudflare?” We talked and texted for days about the idea. It had merit and as scared as I was, I wanted to pitch it! As my dad and I discussed the idea further, it evolved into a full-blown plan of inviting renowned female influencers to attend and share their experiences and accomplishments of working in the tech industry. I wanted it to be a motivational celebration.

After receiving a quick green light from my supervisor and chatting with executives, it happened. Cloudflare got behind the event. 100 percent. And why wouldn’t they? Cloudflare relies on the best and the brightest to do what we do, no matter what. Of course Cloudflare would support an event for kick-ass women!

#PressForProgress - International Women’s Day 2018 | A Cloudflare & Branch Event
Source: Pixnio

Please join Cloudflare and Branch as we join forces to celebrate the evolution of women in technology at our first annual International Women’s Day event!

From Ada Lovelace to Grace Hopper to Katherine Johnson to the incredible panel we’ll hear from at this event, women in technology have always pressed for progress.

The road isn’t always easy to navigate, but it’s more important than ever to remain steadfast and push forward to equity and parity regardless of gender.

At this lunchtime social, we’ll take a short trip and highlight three legendary women in technology over the last 50 years, and then dive into a panel discussion with three female founders. We’ll hear a little about each one’s journey in their respective industries and touch on their view of what it means in today’s climate to press for progress. We’ll open it up at the end for Q&A from the audience.

Lunch will be provided, and there will also be time for networking and connecting with other women in technology.

#PressForProgress - International Women’s Day 2018 | A Cloudflare & Branch Event
Source: Branch.io

Had this 30-something paralegal not pressed for that better job and instead bottled my voice and refrained from sharing my “crazy” idea, no progress would have been made. Working in the tech industry, and specifically for Cloudflare, has allowed me to pursue my dreams, live out my ideas and pave the way for the women of tomorrow. I’m so excited to bring this event to fruition, and I can say on behalf of Cloudflare and Branch that we hope to see you there!

Categories: Technology

JSON API - Moderately critical - Multiple Vulnerabilities - SA-CONTRIB-2018-015

Drupal Contrib Security - Wed, 21/02/2018 - 20:12
Project: JSON API
Date: 2018-February-21
Security risk: Moderately critical 13/25 AC:Complex/A:User/CI:Some/II:Some/E:Theoretical/TD:All
Vulnerability: Multiple Vulnerabilities
Description:

This module provides a JSON API standards-compliant API for accessing and manipulating Drupal content and configuration entities.

  • The module doesn't sufficiently associate cacheability metadata in certain situations thereby causing an access bypass vulnerability.

    This vulnerability is mitigated by the fact that an attacker cannot trigger an exploitable situation themselves.

  • The module doesn't sufficiently check access in certain situations.

    This vulnerability is mitigated by the fact that an attacker must have permission to create entities of certain content entity types.

Update: This is fixed in 8.x-1.10, not 8.x-1.9.
Solution:

Install the latest version:

Categories: Technology

CKEditor Upload Image - Critical - Access bypass - SA-CONTRIB-2018-014

Drupal Contrib Security - Wed, 21/02/2018 - 19:04
Project: CKEditor Upload Image
Date: 2018-February-21
Security risk: Critical 15/25 AC:None/A:None/CI:None/II:Some/E:Theoretical/TD:All
Vulnerability: Access bypass
Description:

This module enables you to drag and drop or paste images into CKEditor.
The module does not sufficiently verify users' permissions, which allows anonymous users to upload files to the server.

Solution: 

Install the latest version:

Categories: Technology

Validating Leaked Passwords with k-Anonymity

CloudFlare - Wed, 21/02/2018 - 19:00

Today, v2 of Pwned Passwords was released as part of the Have I Been Pwned service offered by Troy Hunt. Containing over half a billion real world leaked passwords, this database provides a vital tool for correcting the course of how the industry combats modern threats against password security.

I have written about how we need to rethink password security and Pwned Passwords v2 in the following post: How Developers Got Password Security So Wrong. Instead, in this post I want to discuss one of the technical contributions Cloudflare has made towards protecting user information when using this tool.

Cloudflare continues to support Pwned Passwords by providing CDN and security functionality so that the raw data can easily be made available for download to organisations seeking to protect their customers. Further, as part of the second iteration of this project, I worked with Troy on designing and implementing API endpoints that support anonymised range queries, an additional layer of security for consumers of the API that is visible to the client.

This contribution allows for Pwned Passwords clients to use range queries to search for breached passwords, without having to disclose a complete unsalted password hash to the service.

Getting Password Security Right

Over time, the industry has realised that complex password composition rules (such as requiring a minimum number of special characters) have done little to improve user behaviour: they have not stopped users from putting personal information in passwords, choosing common passwords, or reusing previously breached passwords[1]. Credential stuffing has recently become a real threat; usernames and passwords are obtained from compromised websites and then injected into other websites until compromised user accounts are found.

This fundamentally works because users reuse passwords across different websites; when a set of credentials is breached on one site, it can be replayed on others. Here are some examples of how credentials can be breached from insecure websites:

  • Websites which don't use rate limiting or which don't challenge login requests can have a user's login credentials breached by brute-force attacks using common passwords for that user;
  • database dumps from hacked websites can be taken offline and the password hashes cracked; modern GPUs make this very efficient for dictionary passwords (even with algorithms like Argon2, PBKDF2, and bcrypt);
  • many websites still don't use any form of password hashing; once breached, their passwords can be captured in raw form;
  • man-in-the-middle attacks or hijacking of a web server can allow passwords to be captured before they're hashed.

This becomes a problem with password reuse: having obtained real-life username/password combinations, attackers can inject them into other websites (such as payment gateways, social networks, etc.) until they gain access to more accounts (often of higher value than the originally compromised site).

Under recent NIST guidance, it is a requirement, when storing or updating passwords, to ensure they do not contain values which are commonly used, expected, or compromised[2]. Research has found that 88.41% of users who received a fear appeal (a warning about the consequences of password reuse) later set unique passwords, whilst only 4.45% of users who did not receive one did so[3].

Unfortunately, there are a lot of leaked passwords out there; the downloadable raw data from Pwned Passwords currently contains over 30 GB in password hashes.

Anonymising Password Hashes

The key problem in checking passwords against the old Pwned Passwords API (and all similar services) lies in how passwords are checked: users are effectively required to submit unsalted hashes of their passwords to find out whether a password has been breached. The hashes must be unsalted, as salting them would make them computationally difficult to search quickly.

Currently, there are two choices available for validating whether a password has been leaked:

  • Submit the password (as an unsalted hash) to a third-party service, where the hash can potentially be stored for later cracking or analysis. For example, if you make an API call for a leaked password to a third-party API service using a WordPress plugin, the IP of the request can be used to identify the WordPress installation and then breach it once the password is cracked (such as after a later disclosure); or,
  • download the entire list of password hashes, uncompress the dataset, and then run a search to see if your password hash is listed.

Needless to say, this conflict can seem like being placed between a security-conscious rock and an insecure hard place.

The Middle Way: The Private Set Intersection Problem

Academic computer scientists have considered the problem of how two (or more) parties can validate the intersection of data (from two or more unequal sets either side already holds) without sharing information about what they have. Whilst this work is exciting, these techniques are new, have not been subject to long-term review by the cryptography community, and the cryptographic primitives involved have not been implemented in any major libraries. Additionally (and critically), PSI implementations have substantially higher overhead than our k-anonymity approach (particularly in communication[4]). Even the current academic state of the art is not within acceptable performance bounds for an API service, with the communication overhead being equivalent to downloading the entire set of data.

k-Anonymity

Instead, our approach adds an additional layer of security by utilising a mathematical property known as k-anonymity and applying it to password hashes in the form of range queries. As such, the Pwned Passwords API service never gains enough information about a non-breached password hash to be able to discover the password later.

k-Anonymity is used in multiple fields to release anonymised but workable datasets; for example, so that hospitals can release patient information for medical research whilst withholding information that discloses personal details. Formally, a data set can be said to hold the property of k-anonymity if, for every record in a released table, there are at least k − 1 other records identical to it.
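The formal definition above can be sketched as a small predicate (a hypothetical helper, using hash-prefix strings as the released records):

```python
def is_k_anonymous(records, k):
    """True if every record in the released table has at least k - 1
    other records identical to it (i.e. each value appears >= k times)."""
    counts = {}
    for r in records:
        counts[r] = counts.get(r, 0) + 1
    return all(c >= k for c in counts.values())

# Each value shares its bucket with at least one other record: 2-anonymous.
print(is_k_anonymous(["21BD1", "21BD1", "A94A8", "A94A8"], k=2))  # True
# "E0812" appears alone, so this release is not 2-anonymous.
print(is_k_anonymous(["21BD1", "21BD1", "E0812"], k=2))           # False
```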

By using this property, we are able to separate hashes into anonymised "buckets". A client anonymises the user-supplied hash, downloads all leaked hashes in the same anonymised "bucket" as that hash, and then checks offline whether the user-supplied hash is among them.

In more concrete terms:

[Diagram: the anonymised range query exchange between client and server]

In essence, we turn the tables on password derivation functions; instead of seeking to salt hashes to the point at which they are unique (against identical inputs), we instead introduce ambiguity into what the client is requesting.

Given that hashes are essentially fixed-length hexadecimal values, we can simply truncate them instead of resorting to a decision-tree structure to filter down the data. This does mean buckets are of unequal sizes, but it allows clients to query in a single API request.

This approach can be implemented in a trivial way. Suppose a user enters the password test into a login form, and the service they’re logging into is programmed to validate whether their password is in a database of leaked password hashes. Firstly, the client generates a hash (in our example using SHA-1) of a94a8fe5ccb19ba61c4c0873d391e987982fbbd3. The client then truncates the hash to a predetermined number of characters (for example, 5), resulting in a Hash Prefix of a94a8. This Hash Prefix is then used to query the remote database for all hashes starting with that prefix (for example, by making an HTTP request to example.com/a94a8.txt). The list of hashes in that bucket is then downloaded, and each downloaded hash is compared to see if any match the locally generated hash. If so, the password is known to have been leaked.
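The client side of this exchange can be sketched in a few lines of Python. This is not the live service: the buckets dictionary below is hypothetical stand-in data for an API response, and only the 5-character prefix would ever leave the client:

```python
import hashlib

def range_query_check(password: str, buckets: dict) -> bool:
    """Client-side check: only the 5-character Hash Prefix would be sent
    to the server; the suffix comparison happens locally."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # The real client would fetch this bucket with:
    #   GET https://api.pwnedpasswords.com/range/<prefix>
    candidates = buckets.get(prefix, [])
    return suffix in candidates

# Hypothetical stand-in for an API response: one bucket, one known hash.
leaked = hashlib.sha1(b"test").hexdigest().upper()
buckets = {leaked[:5]: [leaked[5:]]}

print(leaked[:5])                                     # A94A8
print(range_query_check("test", buckets))             # True
print(range_query_check("not-in-the-list", buckets))  # False
```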

As this can easily be implemented over HTTP, client side caching can easily be used for performance purposes; the API is simple enough for developers to implement with little pain.

Below is a simple Bash implementation of how the Pwned Passwords API can be queried using range queries (Gist):

#!/bin/bash
echo -n "Password: "
read -s password
echo
# Some OpenSSL builds prefix the digest with "(stdin)= "; keep only the last field.
hash="$(echo -n "$password" | openssl sha1 | awk '{print $NF}')"
upperCase="$(echo "$hash" | tr '[a-z]' '[A-Z]')"
prefix="${upperCase:0:5}"
response=$(curl -s "https://api.pwnedpasswords.com/range/$prefix")
while read -r line; do
    lineOriginal="$prefix$line"
    if [ "${lineOriginal:0:40}" == "$upperCase" ]; then
        echo "Password breached."
        exit 1
    fi
done <<< "$response"
echo "Password not found in breached database."
exit 0

Implementation

Hashes (even in unsalted form) have two properties that help in anonymising data.

Firstly, the avalanche effect means that a small change to the input results in a very different hash output; this means you can't infer the contents of one hash from another. This holds even in truncated form.

For example, the Hash Prefix 21BD1 contains 475 seemingly unrelated passwords, including:

  • lauragpe
  • alexguo029
  • BDnd9102
  • melobie
  • quvekyny
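The avalanche effect is easy to demonstrate; the two inputs below are arbitrary examples differing by a single character:

```python
import hashlib

# Two near-identical inputs yield unrelated digests.
a = hashlib.sha1(b"password1").hexdigest()
b = hashlib.sha1(b"password2").hexdigest()
print(a[:5], b[:5])  # even the 5-character prefixes differ

# Positions where the hex digits agree: roughly 1 in 16 by chance,
# nowhere near the 40/40 you'd see if similar inputs gave similar hashes.
matching = sum(x == y for x, y in zip(a, b))
print(f"{matching} of 40 hex positions match")
```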

Further, hashes are fairly uniformly distributed. If we count the original 320 million leaked passwords (in Troy's dataset) by the first hexadecimal character of the hash, the difference between the number of hashes under the largest and the smallest first digit is ≈ 1%. The chart below shows the hash count by first hexadecimal digit:

[Chart: hash count by first hexadecimal digit of the hash]

Algorithm 1 provides a simple check to discover how far we can truncate hashes whilst ensuring every "bucket" has more than one hash in it. It requires the hashes to be sorted by hexadecimal value; including an initial merge sort, it runs in roughly O(n log n + n) time (worst case):

[Figure: Algorithm 1 - finding the Maximum Hash Prefix length]
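In lieu of the original figure, here is one possible Python rendering of the idea behind Algorithm 1 (a sketch with illustrative toy hashes, not the published pseudocode):

```python
def max_prefix_length(hashes, k=2):
    """Longest prefix length at which every bucket still contains at
    least k hashes (k = 2 means no hash sits alone in its bucket)."""
    length = len(hashes[0])
    while length > 0:
        counts = {}
        for h in sorted(hashes):  # sorted, per the initial merge sort
            counts[h[:length]] = counts.get(h[:length], 0) + 1
        if min(counts.values()) >= k:
            return length
        length -= 1
    return 0

# Toy 5-character "hashes": truncating to 3 characters is the longest
# prefix at which every bucket holds at least two entries.
print(max_prefix_length(["AAAB1", "AAAC2", "AABD3", "AABE4"]))  # 3
```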

After identifying the Maximum Hash Prefix length, it is fairly easy to separate the hashes into buckets, as described in Algorithm 3:

[Figure: Algorithm 3 - splitting hashes into buckets by Hash Prefix]
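Again standing in for the figure, a minimal sketch of the bucketing step (the function name and toy hashes are illustrative):

```python
from collections import defaultdict

def bucket_hashes(hashes, prefix_len=5):
    """Group hashes into buckets keyed by Hash Prefix; each bucket keeps
    only the suffixes, since the prefix is implied by the key (mirroring
    the per-prefix range files served by the API)."""
    buckets = defaultdict(list)
    for h in hashes:
        buckets[h[:prefix_len]].append(h[prefix_len:])
    return dict(buckets)

toy = ["A94A8XXXX", "A94A8YYYY", "21BD1ZZZZ"]
print(bucket_hashes(toy))  # {'A94A8': ['XXXX', 'YYYY'], '21BD1': ['ZZZZ']}
```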

This implementation was originally evaluated on a dataset of over 320 million breached passwords; we found that the Maximum Prefix Length to which all hashes can be truncated, whilst maintaining k-anonymity, is 5 characters. When hashes are grouped by a 5-character Hash Prefix, the median number of hashes associated with a Hash Prefix is 305. With response sizes for a query varying from 8.6 KB to 16.8 KB (a median of 12.2 KB), the dataset is usable in many practical scenarios and is certainly a good response size for an API client.

On the new Pwned Passwords dataset (with over half a billion passwords), and whilst keeping the Hash Prefix length at 5, the average number of hashes returned is 478 - with the smallest buckets containing 381 (E0812 and E613D) and the largest containing 584 (00000 and 4A4E8).

Splitting the hashes into buckets by a Hash Prefix of 5 means a maximum of 16^5 = 1,048,576 buckets would be utilised (for SHA-1), assuming every possible Hash Prefix contains at least one hash. In the datasets we found this to be the case: the number of distinct Hash Prefix values equalled the highest possible number of buckets. Whilst for secure hashing algorithms it is computationally infeasible to invert the hash function, it is worth noting that since a SHA-1 hash is 40 hexadecimal characters long and 5 of those are used by the Hash Prefix, the total number of possible hashes associated with a Hash Prefix is 16^35 ≈ 1.39×10^42.
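As a quick check of the bucket arithmetic (40 − 5 = 35 hexadecimal characters remain after the prefix):

```python
# Bucket count for SHA-1 with a 5-character Hash Prefix.
buckets = 16 ** 5
print(buckets)  # 1048576

# Possible hashes sharing any one prefix: 35 free hex characters.
per_bucket = 16 ** 35
print(f"{per_bucket:.2e}")  # 1.39e+42
```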

Important Caveats

It is important to note that where a user's password is already breached, an API call for a specific range of breached passwords can narrow the candidates used in a brute-force attack. Whilst users with breached passwords are already vulnerable to such attacks, a range query does reveal which bucket a password falls into - although the API service has no way of determining whether the client was searching for a password that was actually breached. Using a deterministic algorithm to also run queries for other Hash Prefixes can help reduce this risk.

One reason this is important is that this implementation does not currently guarantee l-diversity, meaning a bucket may contain a hash which is of substantially higher use than others. In the future we hope to use percentile-based usage information from the original breached data to better guarantee this property.

For general users, Pwned Passwords is usually exposed via a web interface, which uses a JavaScript client to run this process; if the origin web server were hijacked to change the JavaScript being returned, this computation could be removed (and the password could be sent to the hijacked origin server). Whilst JavaScript requests are somewhat transparent to a developer inspecting them, this cannot be depended upon, so for technical users, non-web-based clients are preferable.

The original use-case for this service was to be deployed privately in a Cloudflare data centre, where our services can use it to enhance user security, with range queries complementing the existing transport security. Depending on your risks, it is safer to deploy this service yourself (in your own data centre) and use the k-anonymity approach to validate passwords where individual services do not have the resources to store an entire database of leaked password hashes.

I would strongly recommend against storing the range queries made by users of your service; if you must store them for whatever reason, store them only as aggregate analytics such that they cannot be traced back to any given user's password.

Final Thoughts

Going forward, as we test this technology more, Cloudflare is looking into how we can use a private deployment of this service to better offer security functionality, both for log-in requests to our dashboard and for customers who want to protect against credential stuffing on their own websites using our edge network. We are also considering how we can incorporate recent work on the Private Set Intersection problem, alongside l-diversity, for additional security guarantees. As always, we'll keep you updated right here on our blog.

  1. Campbell, J., Ma, W. and Kleeman, D., 2011. Impact of restrictive composition policy on user password choices. Behaviour & Information Technology, 30(3), pp.379-388. ↩︎

  2. Grassi, P. A., Fenton, J. L., Newton, E. M., Perlner, R. A., Regenscheid, A. R., Burr, W. E., Richer, J. P., Lefkovitz, N. B., Danker, J. M., Choong, Y.-Y., Greene, K. K., and Theofanos, M. F. (2017). NIST Special Publication 800-63B Digital Identity Guidelines, chapter Authentication and Lifecycle Management. National Institute of Standards and Technology, U.S. Department of Commerce. ↩︎

  3. Jenkins, Jeffrey L., Mark Grimes, Jeffrey Gainer Proudfoot, and Paul Benjamin Lowry. "Improving password cybersecurity through inexpensive and minimally invasive means: Detecting and deterring password reuse through keystroke-dynamics monitoring and just-in-time fear appeals." Information Technology for Development 20, no. 2 (2014): 196-213. ↩︎

  4. De Cristofaro, E., Gasti, P. and Tsudik, G., 2012, December. Fast and private computation of cardinality of set intersection and union. In International Conference on Cryptology and Network Security (pp. 218-231). Springer, Berlin, Heidelberg. ↩︎

Categories: Technology

How Developers got Password Security so Wrong

CloudFlare - Wed, 21/02/2018 - 19:00

Both in our real lives, and online, there are times where we need to authenticate ourselves - where we need to confirm we are who we say we are. This can be done using three things:

  • Something you know
  • Something you have
  • Something you are

Passwords are an example of something you know; they were introduced for computer authentication in 1961, for a time-sharing computer at MIT. Shortly afterwards, a PhD researcher breached this system (simply by downloading a list of unencrypted passwords) and used the computer time allocated to others.

As time has gone on, developers have continued to store passwords insecurely, and users have continued to set them weakly. Despite this, no viable alternative has been created for password security. To date, no system retains all the benefits that passwords offer, as researchers have rarely considered real-world constraints[1]. For example, when using fingerprints for authentication, engineers often forget that a sizable percentage of the population do not have usable fingerprints, or overlook hardware upgrade costs.

Cracking Passwords

In the 1970s, people started thinking about how to better store passwords and cryptographic hashing started to emerge.

Cryptographic hashes work like trapdoors: whilst it's easy to hash a password, it's far harder (computationally infeasible, for an ideal hashing algorithm) to turn that hash back into the original input. They are used in a lot of things, from speeding up file searches to the One Time Password generators used by banks.

Passwords should ideally use specialised hashing functions like Argon2, BCrypt or PBKDF2; these are designed specifically for passwords and incorporate salting to prevent Rainbow Table attacks.

If you were to hash the password p4$$w0rd using the SHA-1 hashing algorithm, the output would be 6c067b3288c1b5c791afa04e12fb013ed2e84d10. This output is the same every time the algorithm is run. As a result, attackers are able to create Rainbow Tables which contain the hashes of common passwords, which can then be used to break any password hash that appears in the table.

Algorithms like BCrypt essentially salt passwords with a random string before hashing them. This random salt is stored alongside the password hash and makes the output unique, which helps make the password harder to crack. The hashing process is also repeated many times (controlled by a difficulty parameter), each time adding the salt onto the output of the previous round and rerunning the hash computation.

For example, the BCrypt hash $2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy starts with $2a$10$, which indicates that the algorithm used is BCrypt (version 2a) with a cost factor of 10; it contains a random salt of N9qo8uLOickgx2ZMRZoMye and a resulting hash of IjZAgcfl7p92ldGxad68LJZdL17lhWy. Storing the salt allows the password hash to be regenerated identically when the input is known.

Unfortunately, salting alone is no longer enough; passwords can be cracked ever faster using modern GPUs (which specialise in doing the same task over and over). When a site suffers a security breach, users' password hashes can be taken away in database dumps and cracked offline.

Additionally, websites that fail to rate-limit login requests or use CAPTCHAs can be attacked by brute force. For a given user, an attacker will repeatedly try different (but common) passwords until they gain access to that user's account.

Some sites lock users out after a handful of failed login attempts; attackers can instead move on quickly to a new account once the most common set of passwords has been attempted. Lists like the following (in some cases with many, many more passwords) can be used to attempt to breach an account:

[Image: a list of the most commonly used passwords]

The industry has tried to combat this problem with password composition rules, requiring users to comply with complex requirements before setting passwords (such as a minimum number of digits or punctuation symbols). Research has shown that this hasn't helped combat password reuse, weak passwords or users putting personal information in passwords.

Credential Stuffing

Whilst it may seem that weak password storage only harms the websites that practise it, Credential Stuffing makes this problem even worse.

It is common for users to reuse passwords from site to site, meaning a username and password from a compromised website can be used to breach far more important accounts - like online banking gateways or government logins. When a password is reused, it takes just one website being breached to gain access to every other site where the user holds the same credentials.

[Comic: This Is Not Fine - The Nib]

Fixing Passwords

There are fundamentally three things that need to be done to fix this problem:

  • Good UX to improve User Decisions
  • Improve Developer Education
  • Eliminating reuse of breached passwords

How Can I Secure Myself (or my Users)?

Before discussing the things we're doing, I wanted to briefly discuss what you can do to help protect yourself now. For most users, there are three steps you can immediately take to help yourself.

Use a Password Manager (like 1Password or LastPass) to set random, unique passwords for every site. Additionally, look to enable Two-Factor Authentication where possible; this uses something you have, in addition to the password you know, to validate you. This will mean, alongside your password, you have to enter a short-lived password from a device like your phone before being able to login to any site.

Two-Factor Authentication is supported on many of the world's most popular social media, banking and shopping sites. You can find out how to enable it on popular websites at turnon2fa.com. If you are a developer, you should take steps to ensure you support Two-Factor Authentication.

Set a secure memorable password for your password manager; and yes, turn on Two-Factor Authentication for it (and keep your backup codes safe). You can find additional security tips (including tips on how to create a secure master password) in my blog post: Simple Cyber Security Tips.

Developers should look to abolish bad-practice composition rules (and simplify requirements as much as possible). Password expiration policies do more harm than good, so seek to do away with them too. For further information, refer to the blog post by the UK's National Cyber Security Centre: The problems with forcing regular password expiry.

Finally; Troy Hunt has an excellent blog post on passwords for users and developers alike: Passwords Evolved: Authentication Guidance for the Modern Era

Improving Developer Education

Developers should seek to build a culture of security in the organisations where they work; try and talk about security, talk about the benefits of challenging malicious login requests and talk about password hashing in simple terms.

If you're working on an open-source project that handles authentication, expose easy password hashing APIs - for example the password_hash, password_needs_rehash & password_verify functions in modern PHP versions.

Eliminating Password Reuse

We know that complex password composition rules are largely ineffective, and recent guidance has followed suit. A better alternative to composition rules is to block users from signing up with passwords which are known to have been breached. Under recent NIST guidance, it is a requirement, when storing or updating passwords, to ensure they do not contain values which are commonly used, expected or compromised[2].

This is easier said than done: the recent version of Troy Hunt's Pwned Passwords database contains over half a billion passwords (over 30 GB uncompressed). Whilst developers can use API services to check if a password has been reused, this requires sending either the raw password or an unsalted hash of it. It is especially problematic when multiple services in a business handle authentication and each would otherwise need to store the entire set of leaked hashes.

This is a problem I've started looking into recently; as part of our contribution to Troy Hunt's Pwned Passwords database, I have designed a range search API that allows developers to check if a password is reused without needing to share the password (even in hashed form) - instead only needing to send a short segment of the cryptographic hash used. You can find more information on this contribution in the post: Validating Leaked Passwords with k-Anonymity.

Version 2 of Pwned Passwords is now available - you can find more information on how it works on Troy Hunt's blog post "I've Just Launched Pwned Passwords, Version 2".

  1. Bonneau, J., Herley, C., Van Oorschot, P.C. and Stajano, F., 2012, May. The quest to replace passwords: A framework for comparative evaluation of web authentication schemes. In Security and Privacy (SP), 2012 IEEE Symposium on (pp. 553-567). IEEE. ↩︎

  2. Grassi, P. A., Fenton, J. L., Newton, E. M., Perlner, R. A., Regenscheid, A. R., Burr, W. E., Richer, J. P., Lefkovitz, N. B., Danker, J. M., Choong, Y.-Y., Greene, K. K., and Theofanos, M. F. (2017). NIST Special Publication 800-63B Digital Identity Guidelines, chapter Authentication and Lifecycle Management. National Institute of Standards and Technology, U.S. Department of Commerce. ↩︎

Categories: Technology

Drupal core - Critical - Multiple Vulnerabilities - SA-CORE-2018-001

Drupal Security - Wed, 21/02/2018 - 17:10
Project: Drupal core
Version: 8.4.x-dev, 7.x-dev
Date: 2018-February-21
Security risk: Critical 16∕25 AC:Basic/A:User/CI:Some/II:Some/E:Exploit/TD:Default
Vulnerability: Multiple Vulnerabilities
Description: 

This security advisory fixes multiple vulnerabilities in both Drupal 7 and Drupal 8. See below for a list.

Comment reply form allows access to restricted content - Critical - Drupal 8

Users with permission to post comments are able to view content and comments they do not have access to, and are also able to add comments to this content.

This vulnerability is mitigated by the fact that the comment system must be enabled and the attacker must have permission to post comments.

JavaScript cross-site scripting prevention is incomplete - Critical - Drupal 7 and Drupal 8

Drupal has a Drupal.checkPlain() JavaScript function which is used to escape potentially dangerous text before outputting it to HTML. This function does not correctly handle all methods of injecting malicious HTML, leading to a cross-site scripting vulnerability under certain circumstances.

The PHP functions which Drupal provides for HTML escaping are not affected.

Private file access bypass - Moderately Critical - Drupal 7

When using Drupal's private file system, Drupal will check to make sure a user has access to a file before allowing the user to view or download it. This check fails under certain conditions in which one module is trying to grant access to the file and another is trying to deny it, leading to an access bypass vulnerability.

This vulnerability is mitigated by the fact that it only occurs for unusual site configurations.

jQuery vulnerability with untrusted domains - Moderately Critical - Drupal 7

A jQuery cross site scripting vulnerability is present when making Ajax requests to untrusted domains. This vulnerability is mitigated by the fact that it requires contributed or custom modules in order to exploit.

For Drupal 8, this vulnerability was already fixed in Drupal 8.4.0 as a side effect of upgrading Drupal core to use a newer version of jQuery. For Drupal 7, it is fixed in the current release (Drupal 7.57) for jQuery 1.4.4 (the version that ships with Drupal 7 core) as well as for other newer versions of jQuery that might be used on the site, for example using the jQuery Update module.

Language fallback can be incorrect on multilingual sites with node access restrictions - Moderately Critical - Drupal 8

When using node access controls with a multilingual site, Drupal marks the untranslated version of a node as the default fallback for access queries. This fallback is used for languages that do not yet have a translated version of the created node. This can result in an access bypass vulnerability.

This issue is mitigated by the fact that it only applies to sites that a) use the Content Translation module; and b) use a node access module such as Domain Access which implement hook_node_access_records().

Note that the update will mark the node access tables as needing a rebuild, which will take a long time on sites with a large number of nodes.

Settings Tray access bypass - Moderately Critical - Drupal 8

The Settings Tray module has a vulnerability that allows users to update certain data that they do not have the permissions for.

If you have implemented a Settings Tray form in contrib or a custom module, the correct access checks should be added. This release fixes the only two implementations in core, but does not harden against other such bypasses.

This vulnerability can be mitigated by disabling the Settings Tray module.

External link injection on 404 pages when linking to the current page - Less Critical - Drupal 7

Drupal core has an external link injection vulnerability when the language switcher block is used. A similar vulnerability exists in various custom and contributed modules. This vulnerability could allow an attacker to trick users into unwillingly navigating to an external site.

Solution: 

Install the latest version:

Reported By: 
  • Comment reply form allows access to restricted content - Critical - Drupal 8
  • JavaScript cross-site scripting prevention is incomplete - Critical - Drupal 7 and Drupal 8
  • Private file access bypass - Moderately Critical - Drupal 7
  • jQuery vulnerability with untrusted domains - Moderately Critical - Drupal 7
  • Language fallback can be incorrect on multilingual sites with node access restrictions - Moderately Critical - Drupal 8
  • Settings Tray access bypass - Moderately Critical - Drupal 8
  • External link injection on 404 pages when linking to the current page - Less Critical - Drupal 7
Fixed By: 
Categories: Technology


MySQL for MySQL Governor updated

CloudLinux - Wed, 21/02/2018 - 15:27

A new updated MySQL package for MySQL Governor is available for download from our production repository.

Changelog:

cl-MySQL

  • updated MySQL 5.6 up to version 5.6.39.

To update run:

# yum update cl-MySQL*

or

# /usr/share/lve/dbgovernor/mysqlgovernor.py --install

To install on a new server run:

# yum install governor-mysql
# /usr/share/lve/dbgovernor/mysqlgovernor.py --install
Categories: Technology

Beta: MariaDB and MySQL for MySQL Governor updated

CloudLinux - Wed, 21/02/2018 - 07:41

New updated MariaDB and MySQL packages for MySQL Governor are available for download from our updates-testing repository.

Changelog:

cl-MariaDB102

  • updated up to version 10.2.13.

cl-MariaDB101

  • updated up to version 10.1.31.

cl-MySQL57

  • debuginfo package enabled.

Note: we recommend backing up your databases before updating.

To update run:

cl-MySQL:

# yum update cl-MySQL-meta-client cl-MySQL-meta cl-MySQL* governor-mysql --enablerepo=cloudlinux-updates-testing
# restart mysql
# restart governor-mysql

cl-MariaDB:

# yum update cl-MariaDB-meta-client cl-MariaDB-meta cl-MariaDB* governor-mysql --enablerepo=cloudlinux-updates-testing
# restart mysql
# restart governor-mysql

To install on a new server run:

# yum install governor-mysql --enablerepo=cloudlinux-updates-testing
# /usr/share/lve/dbgovernor/db-select-mysql --mysql-version=[mariadb version]
# /usr/share/lve/dbgovernor/mysqlgovernor.py --install-beta
Categories: Technology

ជំរាបសួរ! - Phnom Penh: Cloudflare’s 122nd Data Center

CloudFlare - Wed, 21/02/2018 - 05:04
Cloudflare is excited to turn up our newest data center in Phnom Penh, Cambodia, making over 7 million Internet properties even faster. This is our 122nd data center globally, and our 41st data center in Asia. By the end of 2018, we expect that 95% of the world's population will live in a country with a Cloudflare data center, as we grow our global network to span 200 cities.

Cambodian Internet

Home to over 16 million people, Cambodia has a relatively low base of Internet penetration (~25%) today, but is seeing an increasing number of Internet users coming online. For perspective, Cambodia has approximately the same number of Internet users as Lebanon (where we just turned up our 121st data center!) or Singapore (from where we used to serve a portion of Cambodian visitors).

In the coming weeks, we’ll further optimize our routing for Cloudflare customers and expect to see a growing number of ISPs pick up our customers’ traffic on a low latency path.

[Chart] Latency from a Cambodian ISP (SINET) to Cloudflare customers decreases 10x

Coming up next

Next up, in fact, thousands of feet further up, we head to the mountains for Cloudflare’s 123rd data center. Following that, two upcoming Cloudflare data centers are located well south of the Equator, and a continent away.

Categories: Technology

Using Go as a scripting language in Linux

CloudFlare - Tue, 20/02/2018 - 19:49

At Cloudflare we like Go. We use it in many in-house software projects as well as parts of bigger pipeline systems. But can we take Go to the next level and use it as a scripting language for our favourite operating system, Linux?
gopher image CC BY 3.0 Renee French
Tux image CC0 BY OpenClipart-Vectors

Why consider Go as a scripting language

Short answer: why not? Go is relatively easy to learn, not too verbose and there is a huge ecosystem of libraries which can be reused to avoid writing all the code from scratch. Some other potential advantages it might bring:

  • Go-based build system for your Go project: the go build command is mostly suitable for small, self-contained projects. More complex projects usually adopt some build system or set of scripts, so why not have those scripts written in Go as well?
  • Easy non-privileged package management out of the box: if you want to use a third-party library in your script, you can simply go get it. Because the code is installed in your GOPATH, getting a third-party library does not require administrative privileges on the system (unlike some other scripting languages). This is especially useful in large corporate environments.
  • Quick code prototyping in the early stages of a project: when you're writing the first iteration of the code, it usually takes a lot of edits even to make it compile, and you waste a lot of keystrokes on the "edit->build->check" cycle. Instead you can skip the "build" part and just execute your source file immediately.
  • Strongly-typed scripting language: if you make a small typo somewhere in the middle of a script, most scripting languages will execute everything up to that point and fail on the typo itself. This might leave your system in an inconsistent state. With strongly-typed languages many typos can be caught at compile time, so the buggy script will not run in the first place.

Current state of Go scripting

At first glance Go scripts seem easy to implement with Unix support of shebang lines for scripts. A shebang line is the first line of the script, which starts with #! and specifies the script interpreter to be used to execute the script (for example, #!/bin/bash or #!/usr/bin/env python), so the system knows exactly how to execute the script regardless of the programming language used. And Go already supports interpreter-like invocation for .go files with go run command, so it should be just a matter of adding a proper shebang line, something like #!/usr/bin/env go run, to any .go file, setting the executable bit and we're good to go.

However, there are problems around using go run directly. This great post describes in detail all the issues around go run and potential workarounds, but the gist is:

  • go run does not properly return the script error code back to the operating system and this is important for scripts, because error codes are one of the most common ways multiple scripts interact with each other and the operating system environment.
  • you can't have a shebang line in a valid .go file, because Go does not know how to process lines starting with #. Other scripting languages do not have this problem, because for most of them # is a way to specify comments, so the final interpreter just ignores the shebang line, but Go comments start with // and go run on invocation will just produce an error like:
package main: helloscript.go:1:1: illegal character U+0023 '#'

The post describes several workarounds for the above issues, including using a custom wrapper program, gorun, as an interpreter, but none of them provides an ideal solution. You either:

  • have to use a non-standard shebang line, which starts with //. This is technically not a shebang line at all, but relies on the way the bash shell processes executable text files, so this solution is bash-specific. Also, because of the specific behaviour of go run, this line is rather complex and not obvious (see the original post for examples).
  • have to use a custom wrapper program gorun in the shebang line, which works well; however, you end up with .go files which are not compilable with the standard go build command because of the illegal # character.
How Linux executes files

OK, it seems the shebang approach does not provide us with an all-round solution. Is there anything else we could use? Let's take a closer look at how the Linux kernel executes binaries in the first place. When you try to execute a binary or script (or any file, for that matter, which has the executable bit set), your shell will in the end just use the Linux execve system call, passing it the filesystem path of the binary in question, the command line parameters and the currently defined environment variables. The kernel is then responsible for correctly parsing the file and creating a new process from the code in the file. Most of us know that Linux (and many other Unix-like operating systems) use the ELF binary format for executables.

However, one of the core principles of Linux kernel development is to avoid "vendor/format lock-in" for any subsystem which is part of the kernel. Therefore, Linux implements a "pluggable" system, which allows any binary format to be supported by the kernel - all you have to do is write a correct module which can parse the format of your choosing. And if you take a closer look at the kernel source code, you'll see that Linux supports several binary formats out of the box. For example, in the recent 4.14 Linux kernel we can see that it supports at least 7 binary formats (in-tree modules for the various binary formats usually have a binfmt_ prefix in their names). It is worth noting the binfmt_script module, which is responsible for parsing the above-mentioned shebang lines and executing scripts on the target system (not everyone knows that shebang support is actually implemented in the kernel itself and not in the shell or another daemon/process).

Extending supported binary formats from userspace

But since we concluded that the shebang approach is not the best option for our Go scripting, it seems we need something else. Surprisingly, the Linux kernel already has such a "something else" binary support module, with the fitting name binfmt_misc. The module allows an administrator to dynamically add support for various executable formats directly from userspace through a well-defined procfs interface, and it is well documented.

Let's follow the documentation and try to set up a binary format description for .go files. First of all, the guide tells you to mount the special binfmt_misc filesystem to /proc/sys/fs/binfmt_misc. If you're using a relatively recent systemd-based Linux distribution, it is highly likely that the filesystem is already mounted for you, because systemd installs special mount and automount units for this purpose by default. To double-check, just run:

$ mount | grep binfmt_misc
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=27,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)

Another way is to check whether there are any files in /proc/sys/fs/binfmt_misc: a properly mounted binfmt_misc filesystem will create at least two special files, named register and status, in that directory.

Next, since we want our .go scripts to be able to properly pass their exit code to the operating system, we need the custom gorun wrapper as our "interpreter":

$ go get github.com/erning/gorun
$ sudo mv ~/go/bin/gorun /usr/local/bin/

Technically, we don't need to move gorun to /usr/local/bin or any other system path, as binfmt_misc requires the full path to the interpreter anyway. However, the system may run this executable with arbitrary privileges, so it is a good idea to limit access to the file from a security perspective.

At this point, let's create a simple toy Go script, helloscript.go, and verify that we can successfully "interpret" it. The script:

package main

import (
	"fmt"
	"os"
)

func main() {
	s := "world"
	if len(os.Args) > 1 {
		s = os.Args[1]
	}
	fmt.Printf("Hello, %v!", s)
	fmt.Println("")
	if s == "fail" {
		os.Exit(30)
	}
}

Checking if parameter passing and error handling works as intended:

$ gorun helloscript.go
Hello, world!
$ echo $?
0
$ gorun helloscript.go gopher
Hello, gopher!
$ echo $?
0
$ gorun helloscript.go fail
Hello, fail!
$ echo $?
30

Now we need to tell the binfmt_misc module how to execute our .go files with gorun. Following the documentation, we need this configuration string: :golang:E::go::/usr/local/bin/gorun:OC, which basically tells the system: "if you encounter an executable file with the .go extension, please execute it with the /usr/local/bin/gorun interpreter". The OC flags at the end of the string make sure that the script is executed according to the owner information and permission bits set on the script itself, and not the ones set on the interpreter binary. This makes Go script execution behave the same as the rest of the executables and scripts in Linux.

Let's register our new Go script binary format:

$ echo ':golang:E::go::/usr/local/bin/gorun:OC' | sudo tee /proc/sys/fs/binfmt_misc/register
:golang:E::go::/usr/local/bin/gorun:OC

If the system successfully registered the format, a new file golang should appear under the /proc/sys/fs/binfmt_misc directory. Finally, we can natively execute our .go files:

$ chmod u+x helloscript.go
$ ./helloscript.go
Hello, world!
$ ./helloscript.go gopher
Hello, gopher!
$ ./helloscript.go fail
Hello, fail!
$ echo $?
30

That's it! Now we can edit helloscript.go to our liking and the changes will be immediately visible the next time the file is executed. Moreover, unlike with the previous shebang approach, we can compile this file at any time into a real executable with go build.

Whether you like Go or digging in Linux internals, we have positions for either of these, and even both at once. Check out our careers page.

Categories: Technology

Alt-PHP updated

CloudLinux - Tue, 20/02/2018 - 18:51

New updated Alt-PHP packages are now available for download from our production repository.

Changelog:

alt-php54-pear-1.10.5-2

alt-php55-pear-1.10.5-2

alt-php56-pear-1.10.5-2

alt-php70-pear-1.10.5-2

alt-php71-pear-1.10.5-2

  • updated to version 1.10.5.

alt-php54-pear-ext-1-11

alt-php55-pear-ext-1-11

alt-php56-pear-ext-1-11

  • fixed extensions list for CloudLinux 6.

alt-php51-pecl-ext-1-28

alt-php52-pecl-ext-1-98

alt-php53-pecl-ext-1-120

alt-php54-pecl-ext-1-109

alt-php55-pecl-ext-1-94

alt-php56-pecl-ext-1-61

  • updated extension timezonedb to 2018.3.

alt-php70-pecl-ext-1-30

alt-php71-pecl-ext-1-18

  • updated extension xdebug to 2.6.0;
  • updated extension timezonedb to 2018.3.

alt-php72-pecl-ext-1-13

  • updated extension xdebug to 2.6.0;
  • updated extension timezonedb to 2018.3;
  • added dbase 7.0.0beta1 extension.

Install command:

yum groupinstall alt-php
Categories: Technology

Live Webinar: How to Boost Web Server Security with Backup

CloudLinux - Tue, 20/02/2018 - 09:10

Join us on Tuesday, March 6th at 12 pm EST for a live webinar!

Helping our customers build a solid web server security plan is something we are very passionate about. Join this webinar to learn how to enhance your web server security with backup. Kirill Bykov, the person in charge of CloudLinux Backup for Imunify360, will go over this new product and explain how, together with core Imunify360 defense features, this comprehensive solution helps protect your servers against various threats.

As always, a Q&A session will follow the presentation.

The webinar will take place on Tuesday, March 6th at 12 pm EST / 9 am PST, and you can register for it here.

(If you can’t attend, register anyway and we’ll send you the recording after the event.)

We hope you can join us.

Categories: Technology

GDPR: How small companies can get ready for it (and why you can’t just ignore it)

Postmark - Tue, 20/02/2018 - 05:00
GDPR and Postmark

I’ll start this post in the same way every blog post about GDPR starts: by saying, in capital letters, that I AM NOT A LAWYER. No seriously, I hope this post will be helpful to those who are trying to figure out what they need to do about GDPR. But you should definitely, absolutely, talk to a lawyer. There is simply no way you can do this yourself.

With that disclaimer out of the way… We’ve been working on getting ready for the EU General Data Protection Regulation (GDPR) for the past few months, and we’re finally at a point where I’m pretty confident in saying that Postmark is fully compliant and ready for when the law goes into effect on May 25, 2018. We had to make some interesting decisions and trade-offs along the way, especially since we’re a small company without legal counsel on staff. I wanted to share a few thoughts on what we learned along the way to becoming GDPR compliant, with the hope that it will help streamline the process for your business.

Side note: if you're a US company with no EU presence and you're wondering if you should care about this, the answer is YES. Even if you don’t care about it, your EU customers do, and it will affect whether they use your product. From a purely legal perspective, if you process EU citizen data, which will be [checks clipboard] pretty much every company, this law applies to you.

First, it’s important to note that this is not just another obscure privacy law that you can ignore. TechCrunch accurately describes the law as “Data protection + teeth”. Karen Cohen wrote a very succinct summary of what this means for you:

Businesses that are not compliant may get sanctioned up to 4% of the annual worldwide turnover or fined up to € 20M (the higher of the two), per infringement. If your company processes any information of EU citizens you should start paying attention.

TLDR; When you collect data linked to a citizen of the EU, they are entitled to know what data is kept, for what purpose, and for how long. Users are entitled to access (“Right To Access”), export (“Right to Data Portability”), change, and permanently delete (“Right To Be Forgotten”) all their data from your systems (read more here). They should be able to access their data as easily as they entered it in the first place.

Changes every company will need to make to become compliant with GDPR include (but are not limited to):

  • Updates to your Privacy Policy to indicate what information you store about your customers, and how you’re using it.
  • Changes to your sign-up process to ensure explicit consent is given to collect this data (no more “by clicking on this button you agree to blah blah blah” shenanigans).
  • Have a process in place to respond to DSR (Data Subject Rights) requests such as exporting or deleting customer data.
  • Make sure that appropriate data security is in place to prevent unauthorized access to customer data (GDPR calls this “Data protection by design and by default”), and make these security measures abundantly explicit. This includes binding commitments on what you’ll do if a data breach occurs. In most cases this will require you to have a Data Processing Addendum (DPA) in place with your customers. (Spoiler: this is going to be the most time-consuming and expensive part of the process.)

In our case we put together a dedicated EU Privacy Page to address each of these issues, as well as answer the most common questions our customers have been asking us. Let’s step through some of the most important aspects you’ll need to address.

Privacy Policy and explicit consent

The GDPR Alliance posted a nice overview that explains exactly what the new law requires with regard to personal information:

  • Requires that consent is given or there is a good reason to process or store personal information.
  • Gives a person a right to know what information is held about them.
  • Allows a person to request that information about them is erased and that they are ‘forgotten’, unless there is a reason not to do this (e.g. a loan account).
  • Makes sure that personal information is properly protected. New systems must have protection designed into them (“Privacy by Design”). Access to data is strictly controlled and only given when required (“Privacy by Default”).
  • If data is lost, stolen or is accessed without authority, the authorities must be notified and possibly the people whose data has been accessed may need to be notified also.
  • Data cannot be used for anything other than the reason given at the time of collection.
  • Data is securely deleted after it is no longer needed.

This will most likely result in two fairly major changes for most companies.

Privacy policy

First, you will need to adapt your Privacy Policy to explicitly indicate what data you collect about users, what you use it for, and who is allowed to access it. You also need to indicate how the data is secured, and what the process will be if a breach happens. You can see our version of these changes in our Privacy Policy here.

I want to stress this again: you will need a lawyer for this part. There is no way you can just wing these changes. The penalties for getting it wrong (or providing misinformation) are huge.

Explicit consent

Second, no more of this:

Going forward, you will have to get explicit agreement to your Terms of Service and Privacy Policy from customers. To be more specific and way more wonky… You will need to:

  • Place an unchecked check box next to the call-out line regarding the Terms of Use and Privacy Policy. Customers will need to check this box before they sign up for an account. Companies (including us) have been using just a button without an explicit checkbox, and this was even recently accepted in a case involving Uber. However, that approach carries a lot of risk: the Uber case is on appeal, and the same presentation was rejected in the previous iteration of the case.
  • Have each of "Terms of Use" and "Privacy Policy" be a hyperlink to the relevant page. Make sure the relevant page opens up in readable format and can be saved / downloaded if the customer wants.
  • Put the "Register" button right underneath the call-out line so that it is not possible to miss (see our example below).
  • Retain the following information in connection with each clickthrough so you can prove you acquired consent properly: who consented, when they consented, what they were told at the time (terms and policies they agreed to), how they consented, and whether they have withdrawn consent (and if so, when).

In short, this is the new normal for your app:

Data protection and cross-border transfers

Like many companies, we put a lot of our hope in the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks. Certification through this framework was supposed to solve all our cross-border data transfer problems. And in theory, it does. Privacy Shield is a mechanism jointly implemented by the European Commission and the US to enable companies to lawfully transfer personal data of EU residents to the US. The problem is that Privacy Shield is subject to legal challenge, with critics claiming that it does not fully protect the fundamental rights of individuals provided under EU privacy law.

So I'll tell you this. It might be the law, but I have spoken to very few EU companies who equate the legality of Privacy Shield with the “enoughness” of Privacy Shield. To put it another way: nobody trusts this thing. So even though we are certified under Privacy Shield, we realized pretty early on that we were going to lose a lot of customers if we didn’t also put something more rigorous in place.

Enter Data Processing Addendums (DPAs) and Model Clauses. The DPA offers contractual terms that meet GDPR requirements and that reflect a company’s data privacy and security commitments to their clients. And since Privacy Shield is not considered “enough” by many companies, the EU Model Clauses add standardized contractual clauses to the DPA to ensure that any personal data leaving the EEA will be transferred in compliance with EU data-protection law.

Here’s the hard truth about DPAs: they’re expensive. Right now every company is scrambling to create DPAs and be the first out of the gate to get them signed with all their customers. If you’re on the receiving end of this, you’re going to have to spend a lot of legal fees to get each DPA reviewed as it comes your way.

For a small company like ours, that is simply not possible. So we made a tough call on this one. We don’t sign other companies’ DPAs, and we don’t allow companies to make any changes to the DPA we created. We understand that we might lose a few customers because of this. But the cost of a lost customer is way less than the cost of passing every change to our lawyer and having a back-and-forth about it with the client (not to mention the cost of maintaining multiple versions of this thing). Here is how we explain this to our customers:

To ensure no inconsistent or additional terms are imposed on us beyond that reflected in our standard DPA and model clauses, we cannot agree to sign customers’ DPAs. As a small team we also can’t make individual changes to our DPA since we don't have a legal team on staff. Any changes to the standard DPA would require legal counsel and a lot of back and forth discussion that would be cost prohibitive for our team.

In short: our DPA is 100% accurate in terms of our privacy and security practices (and those practices are more than adequate to satisfy GDPR requirements). We are therefore not going to make any changes to it, because we’d be promising things we don’t do (which, you know, you definitely shouldn’t do in a legally binding contract).

So here’s my advice for small companies: Spend a lot of money upfront on a good lawyer to get a really good DPA in place. And then stick with it.

We went a step further with this. On our EU Privacy page, we allow customers to sign our DPA electronically. This has been another huge time saver for us. But not just that, it shows our commitment to Privacy, Security, Customer Success, and GDPR in particular. It might be a boring page, but it’s my favorite Postmark “feature” in a long time.

Data Subject Rights

Data Subject Rights (DSR) is a big topic in GDPR, but for most SaaS apps it comes down to two main things: the right to be forgotten (delete) and the right to data portability (export). For delete requests, all personal data must be deleted within thirty days of receipt of the request. For export requests, customers require all personal information that is held for more than forty-eight hours to be easily accessible upon request.

Since our customers typically have customers of their own, some of our customers have asked us for a new API endpoint to service delete and export requests, so that they can easily fulfill DSR requests across multiple vendors. This is obviously a big investment, and for the moment we have opted not to make it. The truth is we simply don’t know how many of these requests we will receive.

The GDPR states that DSR requests have to be fulfilled within 30 days of the request being received. So we are instead committing to our customers that we will respond to their DSR requests within 7 days of receiving them. That should give our customers plenty of time to respond to their own customers if/when they receive such requests.

Whichever way you choose to go, this is another important aspect to think through for GDPR.

We’re in this together

The other day I tweeted that this appears to be every company trying to get ready for GDPR right now:

It’s true that we’re all stumbling around a little bit. But it’s also great to see so many companies take this law seriously — as they should. My sincere hope is that this post contributes a little bit to the discussion, and helps some of you figure out what you need to do to prepare for this law to go into effect.

Categories: Technology

DrupalCamp London

Drupal - Mon, 19/02/2018 - 19:45

The following blog was written by Drupal Association Premium Supporting Partner, DrupalCamp London.

The people surrounding Drupal have always been one of its strongest selling points; hence the motto “Come for the code, stay for the community”. We bring individuals from a multitude of backgrounds and skill sets together to push forward towards a common goal whilst supporting and helping each other. Within the community, there are a number of ways to connect to each other; both online and in person. A good way to meet in person is by attending DrupalCons and DrupalCamps.

DrupalCamps

A DrupalCamp is similar to a DrupalCon but on a much smaller scale. Where a ‘Con has 1,600+ attendees, a ‘Camp ranges anywhere from 50-600 people. In Europe alone there were over 50 camps in 2017, including DrupalCamp London.

DrupalCamp London

DrupalCamp London brings together hundreds of people from across the globe who use, develop, design, and support the Drupal platform. It’s a chance for Drupalers from all backgrounds to meet, discuss, and engage in the Drupal community and project. DrupalCamp London is the biggest camp in Europe (followed very closely by Kiev), at ~600 people over three days. Due to its size and location, we’re able to run a wide range of sessions, keynotes, BoFs, Sprints, and activities to take part in.

What happens over the three days?

Friday (CxO day)

Friday (CxO day) is primarily aimed at business leaders who provide or make use of Drupal services (i.e. web development agencies, training companies, clients, etc.), but naturally, everyone is welcome. Throughout the day we'll have speakers talking about their experiences working with Drupal and Open Source technologies in their sector(s) or personal life. With a hot food buffet for lunch and a free drinks reception at the end of the day, you'll also have ample time to network with the other attendees.

Benefits of attending 

Benefits for CTOs, CMOs, COOs, CEOs, Technical Directors, Marketing Directors and Senior Decision Makers: 

  • Understand how leading organisations leverage the many benefits of Drupal
  • Network with similar organisations in your sector
  • Learn directly from thought leaders via specific case studies
Saturday/Sunday (Weekend event)

Over the weekend, we have 3 Keynote speakers, a choice of over 40 sessions to attend, BoF (Birds of a Feather) talks, Sprints, great lunch provided (both days) and a Saturday social. With all the activity there is something for everyone to get involved in.

Benefits of attending 

Networking 

Over 500 people attended the weekend event last year and we are expecting it to grow even more this year. Not all attendees are devs either, with a fair share of managers, designers, C-Level, and UX leads there's a great opportunity for all skill sets to interact with each other. Big brands use Drupal (MTV, Visit England, Royal.gov, Guardian, Twitter, Disney) and this is a chance to meet with people from those companies to compare notes, and learn from each other. 

Recruitment

As above, the chance to meet so many people from various skill sets is a great way to line up potential interviews and hires for any aspect of your business. At the very least you'll be able to meet interesting people for any future potential hires. 

Marketing & Raising company profile 

Attending an event with a huge turnout is a great way to meet people and talk to them about what you and your company do. Embedding your name within the tight-knit Drupal community can attract the attention of other companies. Sponsoring the camp means that your logo and additional information can be seen around the camp, in tote bags given to attendees, and online. The social and sponsors stands are the perfect chance to talk to other companies and people attending DrupalCamp, to find out how they use Drupal for their benefit. 

Learning 

DrupalCamp isn't just for devs: over the weekend there are sessions on a broad range of topics, including community & business, UX, and general site building/using Drupal. The technical topics aren't just Drupal-specific either, which gives developers (and others) the chance to learn more about general core coding concepts and methodologies. The methods and techniques learnt help with day-to-day development and long-term work. In addition to the planned sessions, there are BoF (birds of a feather) sessions: ad-hoc get-togethers where people can talk on any topic, allowing a free discussion to share ideas. 

Warm fuzzy feeling/giving back 

Drupal (like any open source software) wouldn't survive without the community. Camps and other events allow the members to come together and see ‘first hand’ that they’re giving back to a community that helps power their tech, maintains their interests, and enables them to make a living.

How to get involved?

It’s easy to get involved with DrupalCamp London, check us out on Twitter for updates and you can find out more about the event and buy tickets on our website.

Categories: Technology
