Blogroll Category: Technology

I read blogs, as well as write one. The 'blogroll' on this site reproduces some posts from some of the people I enjoy reading. There are currently 196 posts from the category 'Technology.'

Disclaimer: Reproducing an article here does not necessarily imply agreement or endorsement!

Backup Survey: What Backup Solution Do You Use?

CloudLinux - Mon, 25/09/2017 - 21:13

Today I’d like to ask for your help with a short but important survey. We’d like to know more about your server backup solutions and preferences - it will help us better understand the needs of the hosting market and shape our product direction.

It will take less than a minute to complete as it contains only a few questions. This survey will not ask for any personal information. Your feedback is invaluable - please respond today.

TAKE SURVEY NOW

Thank you, 

Igor Seletskiy
CloudLinux CEO

Categories: Technology

Unmetered Mitigation: DDoS Protection Without Limits

CloudFlare - Mon, 25/09/2017 - 14:00

This is the week of Cloudflare's seventh birthday. It's become a tradition for us to announce a series of products each day of this week and bring major new benefits to our customers. We're beginning with one I'm especially proud of: Unmetered Mitigation.

CC BY-SA 2.0 image by Vassilis

Cloudflare runs one of the largest networks in the world. One of our key services is DDoS mitigation and we deflect a new DDoS attack aimed at our customers every three minutes. We do this with over 15 terabits per second of DDoS mitigation capacity. That's more than the publicly announced capacity of every other DDoS mitigation service we're aware of combined. And we're continuing to invest in our network to expand capacity at an accelerating rate.

Surge Pricing

Virtually every Cloudflare competitor will send you a bigger bill if you are unlucky enough to be targeted by an attack. We've seen examples of small businesses that survived massive attacks only to be crippled by the bills other DDoS mitigation vendors sent them. From the beginning of Cloudflare's history, it never felt right that you should have to pay more because you came under attack. That feels barely a step above extortion.

With today’s announcement we are eliminating this industry standard of ‘surge pricing’ for DDoS attacks. Why should customers pay more just to defend themselves? Charging more while a customer is suffering through a painful attack feels wrong, just as surge pricing when it rains hurts ride-sharing customers exactly when they need a ride most.

End of the FINT

That said, from our early days, we would sometimes fail customers off our network if an attack against them grew large enough to affect other customers. Internally, we referred to this as FINTing (for Fail INTernal) a customer.

The standards for when a customer would get FINTed were situation-dependent. We had rough thresholds depending on what plan they were on, but the general rule was to keep a customer online unless the size of the attack impacted other customers. For customers on higher-tier plans, when our automated systems didn't handle the attacks themselves, our technical operations team could take manual steps to protect them.

Every morning I receive a list of all the customers that were FINTed the day before. Over the last four years the number of FINTs has dwindled. The reality is that our network today is at such a scale that we are able to mitigate even the largest DDoS attacks without it impacting other customers. This is almost always handled automatically. And, when manual intervention is required, our techops team has gotten skilled enough that it isn't overly taxing.

Aligning With Our Customers

So today, on the first day of our Birthday Week celebration, we make it official for all our customers: Cloudflare will no longer terminate customers, regardless of the size of the DDoS attacks they receive, regardless of the plan level they use. And, unlike the prevailing practice in the industry, we will never jack up your bill after the attack. Doing so, frankly, is perverse.

CC BY-SA 2.0 image by Dennis Jarvis

We call this Unmetered Mitigation. It stems from a basic idea: you shouldn't have to pay more to be protected from bullies who try and silence you online. Regardless of what Cloudflare plan you use — Free, Pro, Business, or Enterprise — we will never tell you to go away or that you need to pay us more because of the size of an attack.

Cloudflare's higher tier plans will continue to offer more sophisticated reporting, tools, and customer support to better tune our protections against whatever threats you face online. But volumetric DDoS mitigation is now officially unlimited and unmetered.

Setting the New Standard

Back in 2014, during Cloudflare's birthday week, we announced that we were making encryption free for all our customers. We did it because it was the right thing to do and we'd finally developed the technical systems we needed to do it at scale. At the time, people said we were crazy. I'm proud of the fact that, three years later, the rest of the industry has followed our lead and encryption by default has become the standard.

I'm hopeful the same will happen with DDoS mitigation. If the rest of the industry moves away from the practice of surge pricing and builds DDoS mitigation in by default, it could largely end DDoS attacks for good. We took a step down that path today and hope that, as with encryption, the rest of the industry will follow.

Want to know more? Read No Scrubs: The Architecture That Made Unmetered Mitigation Possible and Meet Gatebot - a bot that allows us to sleep.

Categories: Technology

No Scrubs: The Architecture That Made Unmetered Mitigation Possible

CloudFlare - Mon, 25/09/2017 - 14:00

When building a DDoS mitigation service it’s incredibly tempting to think that the solution is scrubbing centers or scrubbing servers. I, too, thought that was a good idea in the beginning, but experience has shown that there are serious pitfalls to this approach.

A scrubbing server is a dedicated machine that receives all network traffic destined for an IP address and attempts to filter good traffic from bad. Ideally, the scrubbing server will only forward non-DDoS packets to the Internet application being attacked. A scrubbing center is a dedicated location filled with scrubbing servers.

Three Problems With Scrubbers

The three most pressing problems with scrubbing are bandwidth, cost, and knowledge.

The bandwidth problem is easy to see. With DDoS attacks having scaled to >1Tbps, having that much network capacity available is problematic. Provisioning and maintaining multiple Tbps of bandwidth for DDoS mitigation is expensive and complicated, and the capacity needs to be located in the right place on the Internet to receive and absorb an attack. If it’s not, attack traffic has to be received at one location, scrubbed, and the clean traffic forwarded to the real server: with a limited number of locations, that can introduce enormous delays.

Imagine for a moment you’ve built a small number of scrubbing centers, each connected to the Internet with many Gbps of connectivity. When a DDoS attack occurs, that center needs to be able to handle potentially hundreds of Gbps of attack traffic at line rate. That means exotic network and server hardware: everything from the line cards in routers, to the network adapter cards in the servers, to the servers themselves is going to be very expensive.

This (and bandwidth above) is one of the reasons DDoS mitigation has traditionally cost so much and been billed by attack size.

The final problem, knowledge, is the most easily overlooked. When you set out to build a scrubbing server you are building something that has to separate good packets from bad.

At first this seems easy (let’s filter out all TCP ACK packets for non-established connections, for example), and low-level engineers are easily excited about writing high-performance code to do it. But attackers are not stupid: they’ll throw legitimate-looking traffic at a scrubbing server, and it gets harder and harder to distinguish good from bad.

At that point, scrubbing engineers need to become protocol experts at all levels of the stack. That means you have to build a competency in all levels of TCP/IP, DNS, HTTP, TLS, etc. And that’s hard.

CC BY-SA 2.0 image by Lisa Stevens

The bottom line is that scrubbing centers and exotic hardware are great marketing. But, like the citadels of medieval times, they are monumentally expensive and outdated, overwhelmed by better weapons and warfighting techniques.

And many DDoS mitigation services that use scrubbing centers operate in an offline mode. They are only enabled when a DDoS occurs. This typically means that an Internet application will succumb to the DDoS attack before its traffic is diverted to the scrubbing center.

Just imagine citizens fleeing to hide behind the walls of the citadel under fire from an approaching army.

Better, Cheaper, Smarter

There’s a subtler point about not having dedicated scrubbers: it forces us to build better software. If a scrubbing server becomes overwhelmed or fails then only the customer being scrubbed is affected, but when the mitigation happens on the very servers running the core service it has to work and be effective.

I spoke above about the ‘knowledge gap’ that comes with dedicated DDoS scrubbing. The Cloudflare approach means that if bad traffic gets through, say a flood of bad DNS packets, it reaches a service owned and operated by people who are experts in that domain. If a DNS flood gets past our DDoS protection, it hits our custom DNS server, RRDNS, and the engineers who work on it can bring their expertise to bear.

This makes an enormous difference because the result is either improved DDoS scrubbing or a change to the software (e.g. the DNS stack) that improves its performance under load. We’ve lived that story many, many times and the entire software stack has improved because of it.

The approach Cloudflare took to DDoS mitigation is rather simple: make every single server at Cloudflare participate in mitigation, load-balance DDoS attacks across our data centers and the servers within them, and then apply smarts to the handling of packets. These are the same servers, processors and cores that handle our entire service.

Eliminating scrubbing centers and hardware completely changes the cost of building a DDoS mitigation service.

We currently have around 15 Tbps of network capacity worldwide, but this capacity doesn’t require exotic network hardware. We are able to use low-cost, commodity networking equipment bound together by network automation to handle both normal and DDoS traffic. Just as Google originally built its service by writing software that tied commodity servers into a super (search) computer, our architecture binds commodity servers together into one giant network device.

By building the world’s most peered network, we’ve built this capacity at reasonable cost and, more importantly, are able to handle attack traffic globally, wherever it originates, over low-latency links. No scrubbing solution can say the same.

And because Cloudflare manages DNS for our customers and uses an Anycasted network, attack traffic originating from botnets is automatically distributed across our global network. Each data center deals with a portion of the DDoS traffic.

Within each data center DDoS traffic is load balanced across multiple servers running our service. Each server handles a portion of the DDoS traffic. This spreading of DDoS traffic means that a single DDoS attack will be handled by a large number of individual servers across the world.

As Cloudflare grows, our DDoS mitigation capacity grows automatically, and because our DDoS mitigation is built into our stack, it is always on. We mitigate a new DDoS attack every three minutes with no downtime for Internet applications and no need to ‘switch over’ to a scrubbing center.

Inside a Server

Once all this global and local load balancing has occurred, packets finally hit a network adapter card in a server. It’s here that Cloudflare’s custom DDoS mitigation stack comes into play.

Over the years we’ve learned how to automatically detect and mitigate anything the internet can throw at us. For most attacks, we rely on dynamically managing iptables, the standard Linux firewall; we’ve spoken about the most effective techniques in the past. iptables has a number of very powerful features, which we select depending on the specific attack vector. In our experience, xt_bpf, ipset, hashlimit and connlimit are the most useful iptables modules.
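
To give a flavor of what these modules do, here is a rough sketch of the kind of rules they enable. The addresses, ports and thresholds below are made up for illustration; these are not Cloudflare's actual rules:

# hashlimit: drop DNS floods from any source IP sending over 10,000 packets/second
iptables -A INPUT -p udp --dport 53 \
  -m hashlimit --hashlimit-name dns-flood \
  --hashlimit-mode srcip --hashlimit-above 10000/second -j DROP

# connlimit: cap concurrent TCP connections per source address
iptables -A INPUT -p tcp --syn --dport 443 \
  -m connlimit --connlimit-above 100 -j DROP

# ipset: drop traffic from a large set of addresses stored efficiently in-kernel
ipset create attackers hash:ip
ipset add attackers 198.51.100.7
iptables -A INPUT -m set --match-set attackers src -j DROP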

For very large attacks, though, the Linux kernel is not fast enough. To relieve the kernel from processing an excessive number of packets, we experimented with various kernel bypass techniques and settled on a partial kernel bypass interface: the Solarflare-specific EF_VI.

With EF_VI we can offload the processing of our firewall rules to a user-space program and easily process millions of packets per second on each server while keeping CPU usage low. This allows us to withstand the largest attacks without affecting our multi-tenant service.

Open Source

Cloudflare’s vision is to help build a better internet, and fixing DDoS is a part of it. While we can’t really help with the bandwidth and cost needed to operate on the internet, we can, and are, helping with the knowledge gap. We’ve been relentlessly documenting the most important and dangerous attacks we’ve encountered, fighting botnets, and open sourcing critical pieces of our DDoS infrastructure.

We’ve open sourced various tools, from very low-level projects like our BPF Tools, which we use to fight DNS and SYN floods, to contributions to OpenResty, a performant application framework on top of NGINX that is great for building L7 defenses.

Further Reading

Cloudflare has written a great deal about DDoS mitigation in the past. Some example blog posts: How Cloudflare's Architecture Allows Us to Scale to Stop the Largest Attacks, Reflections on reflection (attacks), The Daily DDoS: Ten Days of Massive Attacks, and The Internet is Hostile: Building a More Resilient Network.

And if you want to go deeper, my colleague Marek Majkowski dives into the code we use for DDoS mitigation.

Conclusion

Cloudflare’s DDoS mitigation architecture and custom software make Unmetered Mitigation possible. With them we can withstand the largest DDoS attacks, and as our network grows, our DDoS mitigation capability grows with it.

Categories: Technology

Meet Gatebot - a bot that allows us to sleep

CloudFlare - Mon, 25/09/2017 - 14:00

In the past, we’ve spoken about how Cloudflare is architected to sustain the largest DDoS attacks. During traffic surges we spread the traffic across a very large number of edge servers. This architecture allows us to avoid having a single choke point because the traffic gets distributed externally across multiple datacenters and internally across multiple servers. We do that by employing Anycast and ECMP.

We don't use separate scrubbing boxes or specialized hardware - every one of our edge servers can perform advanced traffic filtering if the need arises. This allows us to scale up our DDoS capacity as we grow. Each of the new servers we add to our datacenters increases our maximum theoretical DDoS “scrubbing” power. It also scales down nicely - in smaller datacenters we don't have to overinvest in expensive dedicated hardware.

During normal operations our attitude to attacks is rather pragmatic. Since inbound traffic is distributed across hundreds of servers, we can survive periodic spikes and small attacks without doing anything. Vanilla Linux is remarkably resilient against unexpected network events, especially since kernel 4.4, when the performance of SYN cookies was greatly improved.
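
As an aside, SYN cookies are controlled by a standard kernel toggle; the commands below are generic Linux administration, not a Cloudflare-specific setting:

# Check whether SYN cookies are enabled (1 means on; the default on most distributions)
sysctl net.ipv4.tcp_syncookies

# Enable them explicitly if needed
sysctl -w net.ipv4.tcp_syncookies=1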

But at some point, malicious traffic volume can become so large that we must take the load off the networking stack and minimize the amount of CPU spent dealing with attack packets. Cloudflare operates a multi-tenant service and we must always have enough processing power to serve valid traffic. We can't afford to starve our HTTP proxy (nginx) or our custom DNS server (named RRDNS, written in Go) of CPU. When the attack size crosses a predefined threshold (which varies greatly depending on the specific attack type), we must intervene.

Mitigations

During large attacks we deploy mitigations to reduce the CPU consumed by malicious traffic. We have multiple layers of defense, each tuned to a specific attack vector.

First, there is “scattering”: since we control DNS resolution, we are able to move the domains we serve between IP addresses. This is an effective technique as long as the attack doesn’t follow the updated DNS resolution, which is often the case for L3 attacks where the attacker has hardcoded the IP address of the target.
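
To sketch the effect (using documentation-range addresses rather than real Cloudflare IPs): resolvers that honor the DNS update follow the domain to its new home, while an attack hardcoded against the old address no longer hits the domain:

# Before scattering:
$ dig +short example.com
198.51.100.10

# After the domain is moved to a new address:
$ dig +short example.com
203.0.113.77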

Next, there is a wide range of mitigation techniques that leverage iptables, the firewall built into the Linux kernel. But we don't use it like a conventional firewall, with a static set of rules. We continuously add, tweak and remove rules based on specific attack characteristics. Over the years we have mastered the most effective iptables extensions (a sketch of an xt_bpf rule follows the list):

  • xt_bpf
  • ipset
  • hashlimit
  • connlimit
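
To give a flavor of xt_bpf, here is a sketch of a rule that attaches a pre-compiled classic BPF program to iptables. The bytecode string is truncated here; in practice it is generated offline from a tcpdump-style filter, which is what tools like our open-sourced BPF Tools help with:

# Drop packets matched by a pre-compiled cBPF program (bytecode truncated)
iptables -A INPUT -p udp --dport 53 \
  -m bpf --bytecode '14,0 0 0 20,...' -j DROP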

To make the most of iptables, we built a system to manage the iptables configuration across our entire fleet, allowing us to rapidly deploy rules everywhere. This fits our architecture nicely: due to Anycast, an attack against a single IP will be delivered to multiple locations. Running iptables rules for that IP on all servers makes sense.

Using stock iptables gives us plenty of confidence. When possible we prefer to use off-the-shelf tools to deal with attacks.

Sometimes though, even this is not sufficient. iptables is fast in the general case, but has its limits. During very large attacks, exceeding 1M packets per second per server, we shift the attack traffic from kernel iptables to a kernel-bypass user-space program (which we call floodgate), built on the partial kernel bypass offered by the Solarflare EF_VI interface. With this, each server can process more than 5M attack packets per second while consuming only a single CPU core. Floodgate leaves us a comfortable amount of CPU for our applications, even during the largest network events.

Finally, there are a number of tweaks we can make at the HTTP layer. For specific attacks we disable HTTP keep-alives, forcing attackers to re-establish TCP sessions for each request. This sacrifices a bit of performance for valid traffic as well, but is a surprisingly powerful tool for throttling many attacks. For other attack patterns we turn on “I’m under attack” mode, forcing the attack to hit our JavaScript challenge page.
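
You can observe the keep-alive effect from the outside (illustrative; example.com stands in for a zone with keep-alives disabled). A server that disables keep-alives answers with a "Connection: close" header, so every subsequent request costs the client a fresh TCP, and possibly TLS, handshake:

$ curl -sI https://example.com/ | grep -i '^connection'
Connection: close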

Manual attack handling

Early on, these mitigations were applied manually by our tireless SREs. Unfortunately, it turns out that humans under stress... well, make mistakes. We learned this the hard way - one of the most famous incidents happened in March 2013, when a simple typo brought our whole network down.

Humans are also not great at applying precise rules. As our systems grew and mitigations became more complex, with many specific toggles, our SREs got overwhelmed by the details. It was challenging to present all the specific information about an attack to the operator, and we often applied overly broad mitigations, which unnecessarily affected legitimate traffic. All that changed with the introduction of Gatebot.

Meet Gatebot

To aid our SREs we developed a fully automatic mitigation system. We call it Gatebot[1].

The main goal of Gatebot was to automate as much of the mitigation workflow as possible: observe the network and note anomalies, understand the targets of attacks and their metadata (such as the type of customer involved), and perform the appropriate mitigation actions.

Nowadays we have multiple Gatebot instances - we call them “mitigation pipelines”. Each pipeline has three parts:

1) “attack detection” or “signal” - a dedicated system detects anomalies in network traffic. This is usually done by sampling a small fraction of the network packets hitting our network and analyzing them using streaming algorithms, giving us a real-time view of the current status of the network. This part of the stack is written in Golang, and even though it only examines sampled packets, it's pretty CPU-intensive. It might comfort you to know that at this very moment two big Xeon servers burn all of their combined 48 Skylake CPU cores toiling away counting packets and performing sophisticated analytics looking for attacks.

2) “reactive automation” or “business logic” - for each anomaly (attack) we determine who the target is, whether we can mitigate it, and with what parameters. Depending on the specific pipeline, the business logic may be anything from a trivial procedure to a multi-step process requiring a number of database lookups and potentially confirmation from a human operator. This code is not performance-critical and is written in Python. To make it more accessible and readable to others in the company, we developed a simple functional, reactive programming engine. It helps us keep the code clean and understandable, even as we add more steps, more pipelines and more complex logic. To give you a flavor of the complexity: imagine how the system should behave if a customer upgraded their plan during an attack.

3) “mitigation” - the previous step feeds specific mitigation instructions into the centralized mitigation management systems. The mitigations are deployed across the world to our servers, applications, customer settings and, in some cases, to the network hardware.

Sleeping at night

Gatebot operates constantly, without breaks for lunch. For the iptables mitigation pipelines alone, Gatebot got engaged between 30 and 1500 times a day. Here is a chart of mitigations per day over the last six months:

[Chart: Gatebot mitigations per day]

Gatebot is much faster and much more precise than even our most experienced SREs. Without it we wouldn’t be able to operate our service with the appropriate level of confidence. Furthermore, Gatebot has proved to be remarkably adaptable - we started by automating the handling of Layer 3 attacks, but soon proved that the general model works well for automating other things too. Today we have more than 10 separate Gatebot instances doing everything from mitigating Layer 7 attacks to informing our Customer Support team of misbehaving customer origin servers.

Since Gatebot’s inception we have learned a great deal from the "detection / logic / mitigation" workflow. We have reused this model in our Automatic Network System, which is used to relieve network congestion[2].

Gatebot allows us to protect our users no matter the plan. Whether you are on a Free, Pro, Business or Enterprise plan, Gatebot is working for you. This is why we can afford to provide the same level of DDoS protection for all our customers[3].

Does dealing with attacks sound interesting? Join our world-famous DDoS team in London, Austin, San Francisco and our elite office in Warsaw, Poland.

  [1] Fun fact: all our components in this area are called “gate-something”, like: gatekeeper, gatesetter, floodgate, gatewatcher, gateman... Who said that naming things must be hard?

  [2] Some of us have argued that this system should be called Netbot.

  [3] Note: there are caveats. Ask your Success Engineer for specifics!

Categories: Technology

Alt-Ruby updated

CloudLinux - Mon, 25/09/2017 - 13:47

The updated Alt-Ruby packages are available for download from our production repository.

Changelog:

alt-ruby22-2.2.8-24

alt-ruby23-2.3.5-16

alt-ruby24-2.4.2-4

  • CVE-2017-0898: buffer underrun vulnerability in Kernel.sprintf;
  • CVE-2017-10784: escape sequence injection vulnerability in the Basic authentication of WEBrick;
  • CVE-2017-14033: buffer underrun vulnerability in OpenSSL ASN1 decode;
  • CVE-2017-14064: heap exposure vulnerability in generating JSON;
  • fixed multiple vulnerabilities in RubyGems;
  • updated bundled libyaml to version 0.1.7.

Install command:

yum groupinstall alt-ruby
Categories: Technology

The History of Email

CloudFlare - Sat, 23/09/2017 - 17:00

This was adapted from a post which originally appeared on the Eager blog. Eager has now become the new Cloudflare Apps.

QWERTYUIOP

— Text of the first email ever sent, 1971

The ARPANET (a precursor to the Internet) was created “to help maintain U.S. technological superiority and guard against unforeseen technological advances by potential adversaries,” in other words, to avert the next Sputnik. Its purpose was to allow scientists to share the products of their work and to make it more likely that the work of any one team could potentially be somewhat usable by others. One thing which was not considered particularly valuable was allowing these scientists to communicate using this network. People were already perfectly capable of communicating by phone, letter, and in-person meeting. The purpose of a computer was to do massive computation, to augment our memories and empower our minds.

Surely we didn’t need a computer, this behemoth of technology and innovation, just to talk to each other.

[Image: The computers which sent (and received) the first email.]

The history of computing moves from massive data processing mainframes, to time sharing where many people share one computer, to the diverse collection of personal computing devices we have today. Messaging was first born in the time sharing era, when users wanted the ability to message other users of the same time shared computer.

Unix machines have a command called write which can be used to send messages to other currently logged-in users. For example, if I want to ask Mark out to lunch:

$ write mark
write: mark is logged in more than once; writing to ttys002
Hi, wanna grab lunch?

He will see:

Message from zack@Awesome-Mainframe.local on ttys003 at 10:36 ...
Hi, wanna grab lunch?

This is absolutely hilarious if your coworker happens to be using a full-screen tool like vim, which will not take kindly to random output on the screen.

Persistent Messages

When the mail was being developed, nobody thought at the beginning it was going to be the smash hit that it was. People liked it, they thought it was nice, but nobody imagined it was going to be the explosion of excitement and interest that it became. So it was a surprise to everybody, that it was a big hit.

— Frank Heart, director of the ARPANET infrastructure team

An early alternative to Unix called Tenex took this capability one step further. Tenex included the ability to send a message to another user by writing onto the end of a file which only they could read. This is conceptually very simple; you could implement it yourself by creating a file in everyone’s home directory which only they can read:

touch ~/messages
chmod 0622 ~/messages

Anyone who wants to send a message just has to append to the file:

echo "?????\n" >> /Users/zack/messages

This is, of course, not a great system because anyone could delete your messages! I trust the Tenex implementation (called SNDMSG) was a bit more secure.

ARPANET

In 1971, the Tenex team had just gotten access to the ARPANET, the network of computers which was a main precursor to the Internet. The team quickly created a program called CPYNET which could be used to send files to remote computers, similar to FTP today.

One of these engineers, Ray Tomlinson, had the idea to combine the message files with CPYNET. He added a command which allowed you to append to a file. He also wired things up such that you could add an @ symbol and a remote machine name to your messages and the machine would automatically connect to that host and append to the right file. In other words, running:

SNDMSG zack@cloudflare

Would append to the /Users/zack/messages file on the host cloudflare. And email was born!

FTP

The CPYNET format did not have much of a life outside of Tenex unfortunately. It was necessary to create a standard method of communication which every system could understand. Fortunately, this was also the goal of another similar protocol, FTP. FTP (the File Transfer Protocol) sought to create a single way by which different machines could transfer files over the ARPANET.

FTP originally didn’t include support for email. Around the time it was updated to use TCP (rather than the NCP protocol which ARPANET historically used), the MAIL command was added.

$ ftp
< open bbn
> 220 HELLO, this is the BBN mail service
< MAIL zack
> 354 Type mail, ended by <CRLF>.<CRLF>
< Sup?
< .
> 250 Mail stored

These commands were ultimately borrowed from FTP and formed the basis for SMTP (the Simple Mail Transfer Protocol) in 1982.

Mailboxes

The format for defining how a message should be transmitted (and often how it would be stored on disk) was first standardized in 1977:

Date    : 27 Aug 1976 0932-PDT
From    : Ken Davis <KDavis at Other-Host>
Subject : Re: The Syntax in the RFC
To      : George Jones <Group at Host>,
          Al Neuman at Mad-Host

There’s no way this is ever going anywhere...

Note that at this time the ‘at’ word could be used rather than the ‘@’ symbol. Also note that this use of headers before the message predates HTTP by almost fifteen years. This format remains nearly identical today.

The Fifth Edition of Unix used a very similar format for storing a user’s email messages on disk. Each user would have a file which contained their messages:

From MAILER-DAEMON Fri Jul 8 12:08:34 1974
From: Author <author@example.com>
To: Recipient <recipient@example.com>
Subject: Save $100 on floppy disks

They’re never gonna go out of style!

From MAILER-DAEMON Fri Jul 8 12:08:34 1974
From: Author <author@example.com>
To: Recipient <recipient@example.com>
Subject: Seriously, buy AAPL

You’ve never heard of it, you’ve never heard of me, but when you see that stock symbol appear. Buy it.

- The Future

Each message began with the word ‘From’, meaning if a message happened to contain From at the beginning of a line it needed to be escaped lest the system think that’s the start of a new message:

From MAILER-DAEMON Fri Jul 8 12:08:34 2011
From: Author <author@example.com>
To: Recipient <recipient@example.com>
Subject: Sample message 1

This is the body.
>From (should be escaped).
There are 3 lines.
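
The quoting rule itself is a one-liner. A delivery agent could apply it to a message body with something like the following sketch (body.txt and mailbox are hypothetical file names, not the exact historical implementation):

# Prefix any body line beginning with "From " so it can't be mistaken for a new message
sed 's/^From />From /' body.txt >> mailbox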

It was technically possible to interact with your email by simply editing your mailbox file, but it was much more common to use an email client. As you might expect there was a diversity of clients available, but a few are of historical note.

RD was an editor created by Lawrence Roberts, who was at the time the program manager for the ARPANET itself. It was a set of macros on top of the Tenex text editor (TECO), which would itself later become Emacs.

RD was the first client to give us the ability to sort messages, save messages, and delete them. There was one key thing missing though: any integration between receiving a message and sending one. RD was strictly for consuming emails you had received; to reply to a message it was necessary to compose an entirely new message in SNDMSG or another tool.

That innovation came from MSG, which was itself an improvement on a client with the hilarious name BANANARD. MSG added the ability to reply to a message. In the words of Dave Crocker:

My subjective sense was that propagation of MSG resulted in an exponential explosion of email use, over roughly a 6-month period. The simplistic explanation is that people could now close the Shannon-Weaver communication loop with a single, simple command, rather than having to formulate each new message. In other words, email moved from the sending of independent messages into having a conversation.

Email wasn’t just allowing people to talk more easily, it was changing how they talk. In the words of J. C. R. Licklider and Albert Vezza in 1978:

One of the advantages of the message systems over letter mail was that, in an ARPANET message, one could write tersely and type imperfectly, even to an older person in a superior position and even to a person one did not know very well, and the recipient took no offense... Among the advantages of the network message services over the telephone were the fact that one could proceed immediately to the point without having to engage in small talk first, that the message services produced a preservable record, and that the sender and receiver did not have to be available at the same time.

The most popular client from this era was called MH and was composed of several command-line utilities for doing various actions with and to your email.

$ mh
% show
(Message inbox:1)
Return-Path: joed
Received: by mysun.xyz.edu (5.54/ACS) id AA08581; Mon, 09 Jan 1995 16:56:39 EST
Message-Id: <9501092156.AA08581@mysun.xyz.edu>
To: angelac
Subject: Here’s the first message you asked for
Date: Mon, 09 Jan 1995 16:56:37 -0600
From: "Joe Doe" <joed>

Hi, Angela!

You asked me to send you a message. Here it is.
I hope this is okay and that you can figure out how to use that mail system.

Joe

You could reply to the message easily:

% repl
To: "Joe Doe" <joed>
cc: angelac
Subject: Re: Here’s the first message you asked for
In-reply-to: Your message of "Mon, 09 Jan 1995 16:56:37 -0600." <9501092156.AA08581@mysun.xyz.edu>
-------
% edit vi

You could then edit your reply in vi, which is actually pretty cool.

Interestingly enough, in June of 1996 the guide “MH & xmh: Email for Users & Programmers” was actually the first book in history to be published on the Internet.

Pine, Elm & Mutt

All mail clients suck. This one just sucks less.

— Mutt Slogan

It took several years until terminals became powerful enough, and perhaps email pervasive enough, that a more graphical program was required. In 1986 Elm was introduced, which allowed you to interact with your email through a full-screen interface.

[Image: Elm Mail Client]

This was followed by more graphical TUI clients like Mutt and Pine.

In the words of the University of Washington’s Pine team:

Our goal was to provide a mailer that naive users could use without fear of making mistakes. We wanted to cater to users who were less interested in learning the mechanics of using electronic mail than in doing their jobs; users who perhaps had some computer anxiety. We felt the way to do this was to have a system that didn’t do surprising things and provided immediate feedback on each operation; a mailer that had a limited set of carefully-selected functions.

These clients were gradually becoming easier for non-technical people to use, and it was becoming clear how big of a deal this really was:

We in the ARPA community (and no doubt many others outside it) have come to realize that we have in our hands something very big, and possibly very important. It is now plain to all of us that message service over computer networks has enormous potential for changing the way communication is done in all sectors of our society: military, civilian government, and private.

Webmail

Its like when I did the referer field. I got nothing but grief for my choice of spelling. I am now attempting to get the spelling corrected in the OED since my spelling is used several billion times a minute more than theirs.

— Phillip Hallam-Baker on his spelling of ’Referer’, 2000

The first webmail client was created by Phillip Hallam-Baker at CERN in 1994. Its creation was early enough in the history of the web that it led to the identification of the need for the Content-Length header in POST requests.

Hotmail was released in 1996. The name was chosen because it included the letters HTML, to emphasize it being ‘on the web’ (it was originally stylized as ‘HoTMaiL’). When it launched, users were limited to 2MB of storage (at a time when a 1.6GB hard drive cost $399).

Hotmail was originally implemented using FreeBSD but, in a decision I’m sure every engineer regretted, it was moved to Windows 2000 after the service was bought by Microsoft. In 1999, hackers revealed a security flaw in Hotmail that permitted anybody to log in to any Hotmail account using the password ‘eh’. And it took until 2001 for ‘hackers’ to realize you could access other people’s messages by swapping usernames in the URL and guessing a valid message number.


Gmail was famously created in 2004 as a ‘20% project’ of Paul Buchheit. Originally it wasn’t particularly believed in as a product within Google. They had to launch using a few hundred Pentium III computers no one else wanted, and it took three years before they had the resources to accept users without an invitation. It was notable both for being much closer to a desktop application (using AJAX) and for the unprecedented offer of 1GB of mail storage.

The Future

[Image: US Postal Mail Volume, KPCB]

At this point email is a ubiquitous enough communication standard that it’s very possible postal mail as an everyday idea will die before I do. One thing which has not survived well is any attempt to replace email with a more complex messaging tool like Google Wave. With the rise of more targeted communication tools like Slack, Facebook, and Snapchat though, you never know.

There is, of course, a cost to that. The ancestors of the Internet were kind enough to give us a communication standard which is free, transparent, and standardized. It would be a shame to see the tech communication landscape move further and further into the world of walled gardens and proprietary schemas.

We’ll leave you with two quotes:

Mostly because it seemed like a neat idea. There was no directive to ‘go forth and invent e-mail’.

— Ray Tomlinson, answering a question about why he invented e-mail

Permit me to carry the doom-crying one step further. I am curious whether the increasingly easy access to computers by adolescents will have any effect, however small, on their social development. Keep in mind that the social skills necessary for interpersonal relationships are not taught; they are learned by experience. Adolescence is probably the most important time period for learning these skills. There are two directions for a cause-effect relationship. Either people lacking social skills (shy people, etc.) turn to other pasttimes, or people who do not devote enough time to human interactions have difficulty learning social skills. I do not [consider] whether either or both of these alternatives actually occur. I believe I am justified in asking whether computers will compete with human interactions as a way of spending time? Will they compete more effectively than other pasttimes? If so, and if we permit computers to become as ubiquitous as televisions, will computers have some effect (either positive or negative) on personal development of future generations?

— Gary Feldman, 1981

  • Use Cloudflare Apps to build tools which can be installed by millions of sites.

    Build an app →

    If you're in San Francisco, London or Austin: work with us.

  • Our next post is on the history of the URL!
Categories: Technology

A New API Binding: cloudflare-php

CloudFlare - Sat, 23/09/2017 - 01:01

Back in May last year, one of my colleagues blogged about the introduction of our Python binding for the Cloudflare API and drew attention to our other bindings in Go and Node. Today we are complementing this range by introducing a new official binding, this time in PHP.

This binding is available via Packagist as cloudflare/sdk; you can install it using Composer simply by running composer require cloudflare/sdk. We have documented various use-cases in our "Cloudflare PHP API Binding" KB article to help you get started.

Alternatively should you wish to help contribute, or just give us a star on GitHub, feel free to browse to the cloudflare-php source code.


PHP is a controversial language, and there is no doubt that there are elements of bad design within it (as is the case with many other languages). However, love it or hate it, PHP is a language of high adoption; as of September 2017, W3Techs reports that PHP is used by 82.8% of all websites whose server-side programming language is known. In creating this binding the question clearly wasn't about the merits of PHP, but whether we wanted to help drive improvements to the developer experience for the sizeable number of developers who integrate with us using PHP.

In order to help those looking to contribute to or build upon this library, I'm writing this blog post to explain some of the design decisions that went into putting it together.

Exclusively for PHP 7

PHP 5 initially introduced type hinting on the basis of classes and interfaces; this opened up (albeit seldom-used) parametric polymorphic behaviour in PHP. Type hinting on the basis of interfaces made it easier for those developing in PHP to follow the Gang of Four's famous guidance: "Program to an 'interface', not an 'implementation'."

Type hinting has developed slowly in PHP. In PHP 7.0, Scalar Type Hinting was released after a few rounds of RFCs; PHP 7.0 additionally introduced Return Type Declarations, allowing return values to be type hinted in a similar way to arguments. In this library we extensively use Scalar Type Hinting and Return Type Declarations, thereby giving up the backward compatibility that would otherwise be available with PHP 5.

Had we maintained backward compatibility, these improvements to type hinting simply would not have been implementable and the associated benefits would have been lost. With Active Support no longer offered for PHP 5.6, and Security Support a little over a year away from disappearing for the entirety of PHP 5.x, we decided the additional coverage wasn't worth the cost.


Object Composition

What do we mean by a software architecture? To me the term architecture conveys a notion of the core elements of the system, the pieces that are difficult to change. A foundation on which the rest must be built.

— Martin Fowler

When getting started with this package, you'll notice there are 3 classes you'll need to instantiate:

$key = new \Cloudflare\API\Auth\APIKey('user@example.com', 'apiKey');
$adapter = new \Cloudflare\API\Adapter\Guzzle($key);
$user = new \Cloudflare\API\Endpoints\User($adapter);
echo $user->getUserID();

The first class being instantiated is called APIKey (a few other classes for authentication are available). We then instantiate the Guzzle class, injecting the APIKey object into its constructor. The Auth interface that the APIKey class implements is fairly simple:

namespace Cloudflare\API\Auth;

interface Auth
{
    public function getHeaders(): array;
}

The Adapter interface (which the Guzzle class implements) makes explicit that an object built on the Auth interface is expected to be injected into the constructor:

namespace Cloudflare\API\Adapter;

use Cloudflare\API\Auth\Auth;
use Psr\Http\Message\ResponseInterface;

interface Adapter
{
    ...
    public function __construct(Auth $auth, String $baseURI);
    ...
}

In doing so; we define that classes which implement the Adapter interface are to be composed using objects made from classes which implement the Auth interface.

So why am I explaining basic Dependency Injection here? It is critical to understand because, as the design of our API changes, the mechanisms for Authentication may vary independently of the HTTP Client, or indeed of the API Endpoints themselves. Similarly, the HTTP Client or the API Endpoints may vary independently of the other elements involved. Indeed, this package already contains three classes for the purpose of authentication (APIKey, UserServiceKey and None) which need to be used interchangeably. This package therefore anticipates changes to the different components of the API and seeks to allow these components to vary independently.

Dependency Injection is also used where the parameters for an API Endpoint become more complicated than what is permitted by simpler variable types; for example, this is done when defining the Target or Configuration for a Page Rule:

require_once('vendor/autoload.php');

$key = new \Cloudflare\API\Auth\APIKey('mjsa@junade.com', 'apiKey');
$adapter = new \Cloudflare\API\Adapter\Guzzle($key);

$zones = new \Cloudflare\API\Endpoints\Zones($adapter);
$zoneID = $zones->getZoneID("junade.com");

$pageRulesTarget = new \Cloudflare\API\Configurations\PageRulesTargets('https://junade.com/noCache/*');

$pageRulesConfig = new \Cloudflare\API\Configurations\PageRulesActions();
$pageRulesConfig->setCacheLevel('bypass');

$pageRules = new \Cloudflare\API\Endpoints\PageRules($adapter);
$pageRules->createPageRule($zoneID, $pageRulesTarget, $pageRulesConfig, true, 6);

The structure of this project is, overall, based on simple object composition; this provides a far simpler object model for the long term and a design with higher flexibility. For example, should we later want to create an Endpoint class which is a composite of other Endpoints, it becomes fairly trivial to build this by implementing the same interface as the other Endpoint classes. As more code is added, we are able to keep the design of the software relatively thinly layered.

Testing/Mocking HTTP Requests

If you're interested in helping contribute to this repository, there are two key ways you can help:

  1. Building out coverage of endpoints on our API
  2. Building out test coverage of those endpoint classes

The PHP-FIG (PHP Framework Interop Group) has put together a standard for how HTTP responses can be represented by an interface; this is described in the PSR-7 standard. This response interface is utilised by our HTTP Adapter interface: responses to API requests are type hinted to this interface (Psr\Http\Message\ResponseInterface).

By using this standard, it's easier to add further abstractions for additional HTTP clients and mock HTTP responses for unit testing. Let's assume the JSON response is stored in the $response variable and we want to test the listIPs method in the IPs Endpoint class:

public function testListIPs()
{
    $stream = GuzzleHttp\Psr7\stream_for($response);
    $response = new GuzzleHttp\Psr7\Response(200, ['Content-Type' => 'application/json'], $stream);

    $mock = $this->getMockBuilder(\Cloudflare\API\Adapter\Adapter::class)->getMock();
    $mock->method('get')->willReturn($response);

    $mock->expects($this->once())
         ->method('get')
         ->with($this->equalTo('ips'), $this->equalTo([]));

    $ips = new \Cloudflare\API\Endpoints\IPs($mock);
    $ips = $ips->listIPs();

    $this->assertObjectHasAttribute("ipv4_cidrs", $ips);
    $this->assertObjectHasAttribute("ipv6_cidrs", $ips);
}

We are able to build a simple mock of our Adapter interface by using the standardised PSR-7 response format; when we do so, we can define what parameters PHPUnit expects to be passed to the mock. With a mock Adapter class in place, we are able to test the IPs Endpoint class as if it were using a real HTTP client.

Conclusions

Through building on modern versions of PHP, using good Object-Oriented Programming theory and allowing for effective testing we hope our PHP API binding provides a developer experience that is pleasant to build upon.

If you're interested in helping improve the design of this codebase, I'd encourage you to take a look at the PHP API binding source code on GitHub (and optionally give us a star).

If you work with Go or PHP and you're interested in helping Cloudflare turn our high-traffic customer-facing API into an ever more modern service-oriented environment, we're hiring for Web Engineers in San Francisco, Austin and London.

Categories: Technology

Project Jengo Strikes Its First Targets (and Looks for More)

CloudFlare - Thu, 21/09/2017 - 17:02

Jango Fett by Brickset (Flickr)

When Blackbird Tech, a notorious patent troll, sued us earlier this year for patent infringement, we quickly discovered that the folks at Blackbird were engaged in what appeared to be the broad and unsubstantiated assertion of patents -- filing about 115 lawsuits in less than three years without yet winning a single one of those cases on the merits in court. Cloudflare felt an appropriate response would be to review all of Blackbird Tech’s patents, not just the one asserted against Cloudflare, to determine whether they are invalid or should be limited in scope. We enlisted your help in this endeavor by placing a $50,000 bounty on prior art that proves the Blackbird Tech patents are invalid or overbroad, an effort we dubbed Project Jengo.

Since its inception, Project Jengo has doubled in size and brought in a good amount of high-quality prior art submissions. We have received more than 230 submissions so far, and have only just begun to scratch the surface. We have already come across a number of standouts that appear to be strong contenders for invalidating many of the Blackbird Tech patents. This means it is time for us to launch the first formal challenge against a Blackbird patent (besides the one asserted against us), AND distribute the first round of the bounty: $7,500 split among 15 recipients.

We’re just warming up. We provide information below on how you can identify the next set of patents to challenge, help us find prior art to invalidate those targets, and collect a bit of the bounty for yourselves.

I. Announcing Project Jengo’s First Challenges (and Awards!)

We wrote previously about the avenues available to challenge patents short of the remarkable cost and delay of federal court litigation; the exact cost and delay that some Blackbird targets are looking to avoid through settlement. Specifically, we explained the process of challenging patents through inter partes review (“IPR”) and ex parte reexamination (“EPR”).

Based on the stellar Prior Art submissions, we have identified the first challenge against a Blackbird patent.

U.S. Patent 7,797,448 (“GPS-internet Linkage”)

The patent, which has a priority date of October 28, 1999, describes in broad and generic terms “[a]n integrated system comprising the Global Positioning System and the Internet wherein the integrated system can identify the precise geographic location of both sender and receiver communicating computer terminals.” It is not hard to imagine that such a broadly-worded patent could potentially be applied against a massive range of tech products that involve any GPS functionality. The alarmingly simplistic description of the patented innovation is confirmed by the only image submitted in support of the patent application, which shows only two desktop computers, a hovering satellite, and a triangle of dotted lines connecting the three items.

Blackbird filed suit in July 2016 against six companies asserting this ‘448 patent. All of those cases were voluntarily dismissed by Blackbird within three months -- fitting a pattern where Blackbird was only looking for small settlements from defendants who sought to avoid the costs and delays of litigation. A successful challenge that invalidates or limits the scope of this patent could put an end to such practices.

Project Jengo’s Discovery - The patent claims priority to a provisional application filed October 28, 1999, but Project Jengo participants sourced four different submissions that raise serious questions about the novelty of the ‘448 patent when it was filed:

  • Research literature from April 1999 describing a system utilizing GPS cards for addressing terminals connected to the internet: “GPS-Based Geographic Addressing, Routing, and Resource Discovery,” Tomasz Imielinski and Julio C. Navas, Vol. 42, No. 4, COMMUNICATIONS OF THE ACM (pp. 86-92).

  • A request for comment from the Internet Engineering Task Force on a draft research paper from November 1996 on “integrating GPS-based geographic information into the Internet Protocol.” IETF RFC 2009

  • One submission included seven patents that all pre-date the priority date of the ‘448 patent (as early as July 1997) and address similar--yet more specific--efforts to use GPS location systems with computer systems.

  • And on a less-specific but still relevant basis, one submitter points to the APRS system that has been used by Ham Radio enthusiasts and has tagged communications with GPS location for decades.

Project Jengo participants who provided these submissions will each be given an award of $500!

What we plan to do -- Because this patent is written (and illustrated) in such broad terms, Blackbird has shown a willingness to sue under this patent, and Project Jengo has uncovered significant prior art, we think this case provides a promising basis to challenge the ‘448 patent. We are preparing an ex parte reexamination of the ‘448 patent, which we expect to file with the US Patent and Trademark Office in October. Again, you can read about an ex parte challenge here. We expect that after review, the USPTO will invalidate the patent. Although future challenges may be funded through crowdsourcing or other efforts, we will be able to fund this challenge fully through funds already set aside for Project Jengo, even though this patent doesn’t implicate Cloudflare’s services.

US Patent 6,453,335 (the one asserted against Cloudflare)

Project Jengo participants have also done an incredible job identifying relevant prior art on the patent asserted against Cloudflare by Blackbird Tech. Blackbird claims that the patent describes a system for monitoring an existing data channel and inserting error pages when transmission rates fall below a certain level. We received a great number of submissions on that patent and are continuing our analysis.

Cloudflare recently filed a brief with the U.S. District Court in which we pointed to eleven pieces of prior art submitted by Jengo participants that we expect will support invalidity in the litigation.

Bounty hunters who were first to submit prior art already used in the case will each receive $500. The Project Jengo team at Cloudflare is continuing its analysis of all the prior art submissions, and we still need your help! The litigation is ongoing, and we will continue to award bounties for prior art submissions that are used to invalidate the Blackbird patents.

The Search Goes On… with new armor

These challenges to Blackbird patents are only the start. Later in this blog post, we provide an extensive report on the status of the search for prior art on all the Blackbird patents, and include a number of new patents we’ve uncovered. Keep looking for prior art on the Blackbird patents; we still have plenty of bounties to award and a number of patents ripe for a challenge. You can send us your prior art submissions here.

Even if you didn’t receive a cash award (yet), our t-shirts are about to hit the streets! Everyone who submitted prior art to Project Jengo will receive a t-shirt. If you previously made a submission, we’ve emailed you instructions for ordering your shirt. This offer will remain open for the duration of Project Jengo to anyone who submits new prior art on any of the Blackbird patents. Enjoy your new armor!

II. Elsewhere in Project Jengo...

Ethics complaint update

We know Blackbird’s “new model” is dangerous to innovation and merits scrutiny, so we previously lodged ethics complaints against Blackbird Tech with the bar disciplinary committees in Massachusetts and Illinois. This week, we sent an additional letter to the USPTO’s Office of Enrollment and Discipline asking them to look into possible violations of the USPTO Rules of Professional Conduct. As in the other jurisdictions, the USPTO Rules of Professional Conduct prohibit attorneys from acquiring a proprietary interest in a lawsuit (Rule 11.108(i)) and from sharing fees or equity with non-lawyers (Rules 11.504(a) and 11.504(d)). Blackbird’s “new model” seems to violate these ethical standards.

Getting the word out
Cloudflare’s Project Jengo continues to drive conversation about the corrosive problem of patent trolls. Since our last blog update, our efforts have continued to draw attention in the press. For the latest, you can see...


“The hunted becomes the hunter: How Cloudflare’s fight with a ‘patent troll’ could alter the game,” -- TechCrunch


“Cloudflare gets another $50,000, to fight ‘new breed of patent troll,’” -- Ars Technica


“This 32-year-old state senator is trying to get patent trolls out of Massachusetts,” -- TechCrunch


III. A Progress Report on Challenges to the Blackbird Patents

As you continue your search for prior art as part of Project Jengo, we’ve updated our chart of Blackbird patents and identified a number of new patents and applications that Blackbird has acquired.

As reflected on the chart (in red), so far 5 of the patents are being challenged or have been invalidated. In addition to our pending challenge of the ‘448 patent:

  • In June 2016, Blackbird Tech sued software maker kCura LLC and nine of its resellers for allegedly infringing U.S. Patent 7,809,717, described as a Method and Apparatus for Concept-based Visual Presentation of Search Results. kCura makes specialized software used by law firms during document review. The judge in kCura’s case invalidated every claim in the ‘717 patent because the “abstract idea” of using a computer instead of a lawyer to perform document review cannot be patented.

  • US Patent 6,434,212 -- This patent seeks protection for “a pedometer having improved accuracy by calculating actual stride lengths.” Numerous challenges to this patent have been filed with the Patent Trial and Appeal Board (PTAB), which adjudicates IPR challenges. Challenges against this “Pedometer” patent have been filed by Garmin, TomTom, and Fitbit.

  • US Patent 7,129,931 -- This patent for a “multipurpose computer display system” is undergoing IPR challenge brought by Lenovo, Inc.

  • US Patent 7,174,362 -- This patent for a “method and system for supplying products from pre-stored digital data in response to demands transmitted via computer network” was challenged by Unified Patents, Inc.

In the charts below, we’ve highlighted 11 Blackbird patents (in green) that seem ripe for challenge, based on a combination of factors: they seem broadly applicable to important industries, may already have been the basis of a Blackbird lawsuit, and/or already have some valuable prior art sourced through Project Jengo. We’ll take submissions on any Blackbird patent, but these are the patents we’re focused on, and they should get extra attention from Project Jengo participants seeking a bounty.

After our review is a bit further down the road, we’ll make all the prior art we’ve received on these patents available to the public so that anyone facing a challenge from Blackbird can defend themselves. We hope to have that information posted by the end of October.

And finally, Cloudflare is funding the first ex parte challenge entirely out of funds it has set aside for, or received as donations to, Project Jengo. Should any of these patents hit home for you, and you are interested in supporting this fight financially, please reach out to Jengo@cloudflare.com.

Happy Hunting!

Project Jengo Submissions

Patent | Project Jengo Submissions | Cases Brought by Blackbird | Priority Date (M/D/YR)
6175608 - - PEDOMETER | 3 | 1 | 10/28/98
6188683 - - SYSTEM AND METHOD FOR ESTABLISHING LONG DISTANCE VOICE COMMUNICATIONS USING THE INTERNET | 3 | 0 | 2/19/97
6425349 - - BICYCLE PET CARRIER | 7 | 13 | 6/14/01
6434212 - - PEDOMETER | 3 | 7 | 10/28/98
6450222 - - NON-PNEUMATIC TIRE HAVING AN ELASTOMERIC HOOP | 1 | 0 | 7/14/99
6453303 - - AUTOMATED ANALYSIS FOR FINANCIAL ASSETS | 1 | 0 | 8/16/99
6557948 - - BRAKING APPARATUS FOR A VEHICLE | 1 | 3 | 12/15/97
6816085 - - METHOD FOR MANAGING A PARKING LOT | 1 | 1 | 1/14/00
6823036 - - WRISTWATCH-TYPED PEDOMETER WITH WIRELESS HEARTBEAT SIGNAL RECEIVING DEVICE | 2 | 0 | 9/24/03
6956338 - - ANALOG CONTROL OF LIGHT SOURCES | 1 | 0 | 8/12/03
7081036 - - BUTTOCK LIFT SUPPORT | 1 | 9 | 5/9/03
6453335 - - PROVIDING AN INTERNET THIRD PARTY DATA CHANNEL | 102 | 2 | 7/21/98
7086747 - - LOW-VOLTAGE LIGHTING APPARATUS FOR SATISFYING AFTER-HOURS LIGHTING REQUIREMENTS, EMERGENCY LIGHTING REQUIREMENTS, AND LOW LIGHT REQUIREMENTS | 12 | 2 | 12/11/02
7086751 - - ILLUMINATED PRODUCT PACKAGING | 5 | 0 | 6/27/03
7106183 - - REARVIEW CAMERA AND SENSOR SYSTEM FOR VEHICLES | 1 | 5 | 8/26/04
7114834 - - LED LIGHTING APPARATUS | 5 | 13 | 9/23/02
7129931 - - MULTIPURPOSE COMPUTER DISPLAY SYSTEM | 1 | 1 | 9/14/01
7162378 - - POINT OF PLAY TERMINAL | 1 | 0 | 3/12/04
7174362 - - METHOD AND SYSTEM FOR SUPPLYING PRODUCTS FROM PRE-STORED DIGITAL DATA IN RESPONSE TO DEMANDS TRANSMITTED VIA COMPUTER NETWORK | 2 | 6 | 11/21/00
7230392 - - ANALOG CONTROL OF LIGHT SOURCES | 2 | 0 | 8/12/03
7752243 - - METHOD AND APPARATUS FOR CONSTRUCTION AND USE OF CONCEPT KNOWLEDGE BASE | 2 | 0 | 6/6/06
7752557 - - METHOD AND APPARATUS OF VISUAL REPRESENTATIONS OF SEARCH RESULTS | 1 | 0 | 8/29/06
7788261 - - INTERACTIVE WEB INFORMATION RETRIEVAL USING GRAPHICAL WORD INDICATORS | 2 | 0 | 12/14/06
7797448 - - GPS-INTERNET LINKAGE | 8 | 6 | 10/28/99
7809717 - - METHOD AND APPARATUS FOR CONCEPT-BASED VISUAL PRESENTATION OF SEARCH RESULTS | 1 | 10 | 6/6/06
7830245 - - SYSTEM AND METHOD FOR POSITIONING A VEHICLE OPERATOR | 1 | 0 | 3/14/05
7867058 - - SPORTS BRA | 4 | 5 | 4/17/07
8106569 - - LED RETROFIT FOR MINIATURE BULBS | 2 | 0 | 5/12/09
8996546 - - INTERNET BASED RESOURCE RETRIEVAL SYSTEM | 1 | 0 | 5/28/04
9620989 - - RECHARGEABLE BATTERY ACCESSORIES | 5 | 2 | 3/13/13
PUB20130141903 - - LED LIGHTING APPARATUS | 3 | 0 | 9/23/03
PUB20130213082 - - METHOD AND APPARATUS FOR A DISTRIBUTED COOLING SYSTEM FOR ELECTRONIC EQUIPMENT ENCLOSURES | 2 | 0 | 12/22/05
PUB20140200078 - - VIDEO GAME INCLUDING USER DETERMINED LOCATION INFORMATION | 34 | 0 | 4/12/11
6602045 - - WINGTIP WINDMILL AND METHOD OF USE | 0 | 0 | 2/5/00
6883927 - - FRAME ASSEMBLY AND LIGHT FOR AN ELECTRICAL WALL CONDUIT | 0 | 1 | 1/31/00
6552888 - - SAFETY ELECTRICAL OUTLET WITH LOGIC CONTROL CIRCUIT | 0 | 0 | 1/22/01
6705976 - - EXERCISE APPARATUS | 0 | 4 | 8/6/00
6460940 - - SUPPLEMENTAL BRAKE SYSTEM | 0 | 0 | 11/8/94

Newly Uncovered Blackbird Patents

Patent | Cases Brought by Blackbird | Priority Date (M/D/YR)
8478512 - - REAL-TIME TRAFFIC CONDITION MEASUREMENT USING NETWORK TRANSMISSION DATA | 0 | 2/5/04
8489314 - - REAL-TIME TRAFFIC CONDITION MEASUREMENT AND PRESENTATION OF USER-BASED ROUTE DATA | 0 | 2/5/04
8542111 - - PROGRAMMABLE COMMUNICATOR | 0 | 5/23/00
8633802 - - PROGRAMMABLE COMMUNICATOR | 0 | 5/23/00
8744761 - - METHOD AND SYSTEM FOR PROVIDING TRAVEL TIME INFORMATION | 0 | 2/5/04
8866589 - - PROGRAMMABLE COMMUNICATOR | 0 | 5/23/00
8648717 - - PROGRAMMABLE COMMUNICATOR | 0 | 5/23/00
6804225 - - SYSTEM AND METHOD FOR ESTABLISHING LONG DISTANCE VOICE COMMUNICATIONS USING THE INTERNET | 0 | 3/3/97
7116657 - - SYSTEM AND METHOD FOR ESTABLISHING LONG DISTANCE CALL CONNECTIONS USING A DESKTOP APPLICATION | 0 | 8/22/00
8094010 - - PROGRAMMABLE COMMUNICATOR | 0 | 5/23/00
8548719 - - REAL-TIME TRAFFIC CONDITION MEASUREMENT | 0 | 2/5/04
9424848 - - METHOD FOR SECURE TRANSACTIONS UTILIZING PHYSICALLY SEPARATED COMPUTERS | 0 | 6/9/00
8306746 - - METHOD AND SYSTEM FOR PROVIDING TRAVEL TIME INFORMATION | 0 | 2/5/04
8380429 - - REAL-TIME TRAFFIC CONDITION MEASUREMENT USING GPS DATA | 0 | 2/5/04
7740234 - - METHOD AND APPARATUS FOR A LOW-PROFILE SUSPENSION SYSTEM | 0 | 12/22/05
9243927 - - METHOD AND SYSTEM FOR PROVIDING TRAVEL TIME INFORMATION | 0 | 2/5/04
9679286 - - METHODS AND APPARATUS FOR ENABLING SECURE NETWORK-BASED TRANSACTIONS | 0 | 9/20/05
7556271 - - METHOD AND APPARATUS FOR AN ELECTRONIC EQUIPMENT RACK | 0 | 12/22/05
9400190 - - REAL-TIME TRAFFIC CONDITION MEASUREMENT USING NETWORK TRANSMISSION DATA | 1 | 2/5/04
7636430 - - TOLL-FREE CALL ORIGINATION USING AN ALPHANUMERIC CALL INITIATOR | 0 | 11/1/01
7611157 - - METHOD AND APPARATUS FOR AN ELECTRONIC EQUIPMENT RACK | 0 | 12/22/05
8855905 - - REAL-TIME TRAFFIC CONDITION MEASUREMENT USING NETWORK TRANSMISSION DATA | 0 | 2/5/04
8424885 - - METHOD AND APPARATUS FOR AN ENVIRONMENTALLY-PROTECTED ELECTRONIC EQUIPMENT ENCLOSURE | 1 | 12/22/05
7522995 - - METHOD AND SYSTEM FOR PROVIDING TRAVEL TIME INFORMATION | 0 | 2/5/04
7904240 - - METHOD AND SYSTEM FOR PROVIDING TRAVEL TIME INFORMATION | 0 | 2/5/04
7958214 - - METHOD FOR SECURE TRANSACTIONS UTILIZING PHYSICALLY SEPARATED COMPUTERS | 0 | 6/9/00
9086295 - - REAL-TIME TRAFFIC CONDITION MEASUREMENT USING NETWORK TRANSMISSION DATA | 0 | 2/5/04
8457871 - - REAL-TIME TRAFFIC CONDITIONS MEASUREMENT AND PRESENTATION OF SPONSORED CONTENT | 0 | 2/5/04
8715087 - - VIDEO GAME INCLUDING USER DETERMINED LOCATION INFORMATION | 0 | 4/12/11
8285832 - - METHOD FOR SECURE TRANSACTIONS UTILISING PHYSICALLY SEPARATED COMPUTERS | 0 | 6/9/00
7583197 - - PROGRAMMABLE COMMUNICATOR | 0 | 5/23/00
9014972 - - METHOD AND SYSTEM FOR PROVIDING TRAVEL TIME INFORMATION | 0 | 2/5/04
7628409 - - METHOD AND APPARATUS FOR AN ELECTRONIC EQUIPMENT RACK | 0 | 12/22/05
7461849 - - METHOD AND APPARATUS FOR AN ELECTRONIC EQUIPMENT RACK | 0 | 12/22/05
6879678 - - SYSTEM AND METHOD FOR ESTABLISHING LONG DISTANCE CALL CONNECTIONS USING A PERSONAL COMMUNICATION ASSISTANT | 0 | 11/13/00
6694007 - - SYSTEM AND METHOD FOR ESTABLISHING LONG DISTANCE CALL CONNECTIONS USING ELECTRONIC TEXT MESSAGES | 0 | 3/22/01
US20130081778 - - METHOD AND APPARATUS FOR A CLOSE-COUPLED COOLING SYSTEM | 0 | 10/3/11
US20170194800 - - BATTERY PACK | 0 | 3/13/13
US20160102993 - - METHOD AND SYSTEM FOR PROVIDING TRAVEL TIME INFORMATION | 0 | 2/5/04
US20150326992 - - PROGRAMMABLE COMMUNICATOR | 0 | 5/23/00
Categories: Technology

Imunify360 2.4-39 release

CloudLinux - Thu, 21/09/2017 - 07:25

We are pleased to announce that the new Imunify360 2.4-39 production version is now available.

Should you encounter any problems with the product or have any questions, comments or suggestions, please contact our support team at cloudlinux.zendesk.com: Imunify360 department. We’d be more than happy to help you.

Imunify360 2.4-39

Changelog:

  • DEF-3082: fixed Plesk extension installation on el/cl6;
  • DEF-2968: fixed a failure when processing the ModSecurity scan session file;
  • DEF-3046: deploy script checks for updates from Imunify360 repo.

To install new Imunify360 production version 2.4-39 please follow the instructions in the documentation.

To upgrade Imunify360 run:

yum clean all
yum update imunify360-firewall

More information on Imunify360 can be found here.

Categories: Technology

Beta: Imunify360 2.5.2 released

CloudLinux - Thu, 21/09/2017 - 07:04

We are pleased to announce that the new updated beta Imunify360 version 2.5.2 is now available. This latest version includes further improvements to the product as well as new features. Imunify360 has also become more reliable and stable thanks to the bug fixes described below.

Should you encounter any problems with the product or have any questions, comments or suggestions, please contact our support team at cloudlinux.zendesk.com: Imunify360 department. We’d be more than happy to help you.

Imunify360 2.5.2

Changelog:

Improvements:

  • DEF-2867: turned off options in the UI when conflicts are detected;
  • DEF-2946: show TTL of IP;
  • DEF-2979: created cli to manage whitelisted search engines;
  • DEF-3046: deployed script checks for updates from imunify360 repository.

Fixes:

  • DEF-2994: fixed bug with inotify watcher;
  • DEF-3043: agent detects outdated ModSecurity vendors;
  • DEF-3078: fixed broken web scans;
  • DEF-3104: fixed "imunify360: unrecognized service" on first install;
  • DEF-3082: fixed plesk extension installation on el/cl6.

To install new beta Imunify360 version 2.5.2 please follow the instructions in the documentation.

Note. Upgrade is available since Imunify360 version 2.0-19.
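If you are unsure whether your server meets that minimum, you can first check the installed agent version (a minimal sketch; the imunify360-firewall package name is taken from the upgrade command below):

# Show the currently installed Imunify360 package version (must be 2.0-19 or later)
rpm -q imunify360-firewall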

To upgrade Imunify360 run:

yum update imunify360-firewall --enablerepo=imunify360-testing

More information on Imunify360 can be found here.

Categories: Technology

Skype Status - Moderately Critical - Cross Site Scripting - DRUPAL-SA-CONTRIB-2017-076

Drupal Contrib Security - Wed, 20/09/2017 - 19:48
Description

This module enables you to obtain the status of a user's Skype account.

The module doesn't sufficiently sanitize the user input for their Skype ID.

This vulnerability is mitigated by the fact that an attacker must have an account on the site and be allowed to edit/input their Skype ID.

CVE identifier(s) issued
  • A CVE identifier will be requested, and added upon issuance, in accordance with Drupal Security Team processes.
Versions affected
  • Skype Status (skype_status) 7.x-1.x versions prior to 7.x-1.2.

Drupal core is not affected. If you do not use the contributed Skype Status module, there is nothing you need to do.

Solution

Install the latest version: if you use the Skype Status module for Drupal 7.x, upgrade to Skype Status 7.x-1.2 or later.
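For sites managed with Drush, a minimal sketch of the upgrade, assuming the Drupal 7-era Drush commands and the skype_status project name from this advisory:

# Fetch and install the fixed release, then run any pending database updates
drush pm-update skype_status -y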

Also see the Skype Status project page.

Contact and More Information

The Drupal security team can be reached at security at drupal.org or via the contact form at https://www.drupal.org/contact.

Learn more about the Drupal Security team and their policies, writing secure code for Drupal, and securing your site.

Follow the Drupal Security Team on Twitter at https://twitter.com/drupalsecurity

Drupal version: Drupal 7.x
Categories: Technology

Page Access - Unsupported - SA-CONTRIB-2017-75

Drupal Contrib Security - Wed, 20/09/2017 - 19:43
  • Advisory ID: DRUPAL-SA-CONTRIB-2017-75
  • Project: Page Access (third-party module)
  • Date: 20-September-2017
Description

This module provides the option to grant View and Edit access to users and roles on individual node pages.

The security team is marking this module unsupported. There is a known security issue with the module that has not been fixed by the maintainer. If you would like to maintain this module, please read: https://www.drupal.org/node/251466

Versions affected
  • All versions

Drupal core is not affected. If you do not use the contributed Page Access module, there is nothing you need to do.

Solution

If you use the Page Access module for Drupal you should uninstall it.
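If you manage your site with Drush, a minimal sketch of removing the module, assuming the page_access project name from this advisory:

# Disable the module, then remove its schema and settings
drush pm-disable page_access -y
drush pm-uninstall page_access -y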

Also see the Page Access project page.

Fixed by

Not applicable

Contact and More Information

The Drupal security team can be reached at security at drupal.org or via the contact form at https://www.drupal.org/contact.

Learn more about the Drupal Security team and their policies, writing secure code for Drupal, and securing your site.

Follow the Drupal Security Team on Twitter at https://twitter.com/drupalsecurity

Categories: Technology

XeonBD brings secure, Imunify360-powered web hosting to Bangladesh

CloudLinux - Wed, 20/09/2017 - 19:42

Recently we launched our new Imunify360 Provider Directory, which helps website owners find hosting providers that offer plans with Imunify360 protection. It also helps hosting providers accelerate sales by promoting the Imunify360-powered security offerings that make customer websites more secure. Here is a look at one of those companies.

- - -

Guest author: Kazi S M Nazmus Saqib, Founder and Chief Executive Officer of XeonBD

XeonBD is one of the largest web solution providers in Bangladesh, serving most of the country's financial and corporate companies. Security is extremely important to our team and to our customers. CloudLinux has already changed the way we utilize, manage, and control our shared, VPS, and dedicated server hosting services. We have been using CloudLinux OS on all of our shared servers since 2010. In 2015, we deployed KernelCare across all our shared servers, VPS nodes, and dedicated servers to keep our kernels secure and eliminate unwanted reboots.

Before Igor Seletskiy and his team released Imunify360, my team had many hectic days and nights dealing with all the security issues affecting our shared server customers. Because we have a strong belief in the products developed by CloudLinux, we installed Imunify360 on all our shared servers during the first days of its beta release. We wanted to ensure maximum protection for all our shared and corporate hosting clients and resellers.

What especially impressed us is Imunify360's efficient machine-learning technology, its sophisticated process for collecting information from servers all over the world, and its fully automated handling of security issues. It's just like having an unlimited number of security experts working for your security and safety round the clock!

Imunify360 works with XeonBD's in-house security solutions to provide a controlled and secure hosting environment for all our customers. Since the beginning, we’ve appreciated the stability, reliability, compatibility, efficiency, and security features of this software. Now our company is providing a complete solution to further protect from attacks.

We are the only web host that owns and operates a Tier-3 data center in Dhaka, Bangladesh. We also have a colocation data center in the US, and since 2005 we have been offering customers both technical and functional solutions appropriate to their business needs. As one of the largest web solution providers in Bangladesh, security is always a primary concern for our team. Many of XeonBD's customers use open source CMSes like WordPress, Joomla, and Opencart, and the vulnerabilities introduced by outdated applications or compromised scripts and plugins created substantial problems for customers and their servers whenever an account was compromised. Now, with the increased, out-of-the-box protection that Imunify360 offers (including IDS/IPS, WAF, and malware scanning with centralized incident management), we can protect our clients' valuable assets. All this makes the investment in Imunify360 worthwhile and a no-brainer.

To learn more about XeonBD, you can visit xeonbd.com or xeonbd.com/imunify360.

- - -

Imunify360 already secures over half a million websites, and hundreds of web hosting providers run secure servers protected by Imunify360. Because this new provider directory helps both website owners and web hosts, we expect it to grow from the few dozen companies listed at launch to hundreds of companies in a short period of time.

If you haven’t yet requested to be listed, email marketing@cloudlinux.com.

Feature announcement: New Activity feed

Postmark - Wed, 20/09/2017 - 16:15

If there’s one thing we know from talking to our customers it’s this: the Activity page in the Postmark app is probably the most important part of the UI for most of you. It’s kind of a weird page too. It’s a page you don’t really need... until you do, and then you really need it. Here are some of the things we know you use the Activity page for:

  • Track down emails when your customers say they can’t find them.
  • Troubleshoot when you notice weird things going on, like lots of bounces or spam complaints.
  • Check that everything is going fine when a large batch of emails goes out.

You use it for a very specific purpose each time, and then you leave and go about your business, forgetting about it until the next time. That is how we like it, of course. We don’t want to be an app you think about all the time. We just want to make it easy for you to run your app and not worry about emails.

From our customer feedback and research we also knew that the Activity page wasn’t perfect. The layout was somewhat cluttered, which impeded clarity and comprehension. The ability to filter on specific events, emails, and recipients was fairly limited. In short, we realized that the focus of this page should be on clarity and on the speed of finding the information you’re looking for: the Activity page has to be easy to understand and act on, and it needs to make the events you’re looking for extremely quick to find.

Here's what it looked like until today:

Old activity The old Activity page

So we set about redesigning this page with those goals in mind. Along the way we did usability testing on prototypes to make sure we provide you with the best possible experience. And today we’re excited to unveil the new Activity page to you.

Here are some of the highlights.

Increased information density and clarity

The new Postmark activity feed Brand new Activity page

The Activity page is like a news feed for your email. It shows every event that happens to your messages, but the way it was laid out before made it difficult to scan and get a good sense of what was going on. With the new Activity page we improved that in several ways:

  • We moved to a table layout with each associated message event on the left to make it easy to scan the page and spot any issues or errors.
  • We added a new “Delivered” event so that you don’t have to go into an individual message page to see when a message was delivered. This addresses one of the confusing aspects of the previous Activity page, where many of you expected that “Sent” meant “Delivered”.
  • Speaking of “Sent”… we also changed the “Sent” status label to “Processed”, since that fits more accurately with the other status labels as an indication of where each email is in the process of getting to your recipients’ inboxes.
  • We added a bunch of information-rich tool tips on hover states so that, in many cases, you don’t have to click through to the detailed message page any more to get the information you need.
Improved filtering experience

Since the main thing you want to do on this page is find a specific email (or group of emails), we made several improvements to search and filtering:

New filters in Postmark
  • We consolidated the filters into a single interface to make it easier to mix, match and combine.
  • You can now combine multiple filters together, for example to see all emails that had either a Hard Bounce or an ISP Block.
  • In the main text box we added a much-requested feature: filtering by specific sender email addresses.
Export to CSV 

We heard from many of you that you’d like to be able to export the results of a particular search to CSV directly from the UI, so we now allow you to do that, up to a maximum of 500 records. If you’d like to export more than 500 records, you can use the Messages API.
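As a rough sketch of that route, you can page through outbound messages with curl. The endpoint and X-Postmark-Server-Token header below follow Postmark’s Messages API; YOUR_SERVER_TOKEN is a placeholder for your own server token:

# Fetch the most recent 500 outbound messages for this server
curl -s "https://api.postmarkapp.com/messages/outbound?count=500&offset=0" \
  -H "Accept: application/json" \
  -H "X-Postmark-Server-Token: YOUR_SERVER_TOKEN"
# Raise offset in steps of 500 to page past the first batch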

We started this project with the goal of making it easier and faster for you to find the emails you’re looking for. Initial feedback on the prototypes was positive, and we’re really excited to hear what you think of the new page now that it’s live. Please let us know at support@postmarkapp.com if you have any feedback. Or if you’d like to have a face-to-face chat with me about it, feel free to set up some time here.

Categories: Technology

Beta: PHP for EasyApache 4 updated

CloudLinux - Wed, 20/09/2017 - 15:37

The new updated ea-php packages are available for download from our EA4 testing repository.

Changelog:

ea-php51-5.1.6-13.cloudlinux

ea-php52-5.2.17-16.cloudlinux

ea-php53-5.3.29-17.cloudlinux

ea-php54-5.4.45-40.cloudlinux.1

ea-php55-5.5.38-24.cloudlinux.1

ea-php56-5.6.31-3.cloudlinux.1

ea-php70-7.0.23-1.cloudlinux.1

ea-php71-7.1.9-1.cloudlinux.1

ea-php72-7.2.0-3.RC1.cloudlinux.1

  • ALTPHP-380: changed CRIU messaging;
  • ALTPHP-380: added user.ini lookup in home dir;
  • ALTPHP-381: removed autorequire for system libtidy.

Update command:

yum update ea-php* --enablerepo=cl-ea4-testing
Categories: Technology

Beta: Alt-PHP updated

CloudLinux - Wed, 20/09/2017 - 15:14

The new updated Alt-PHP packages are now available for download from our beta repository.

Note. For new installations of Alt-PHP packages, session.save_path will be changed from /tmp to /opt/alt/phpNN/var/lib/php/session, where NN corresponds to the Alt-PHP version.
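To confirm which session path a given Alt-PHP version ends up using, you can query that interpreter directly. This is a minimal sketch assuming the usual /opt/alt/phpNN/usr/bin/php binary location; substitute your version for 70:

# Print the effective session.save_path for alt-php70
/opt/alt/php70/usr/bin/php -i | grep session.save_path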

mod_lsapi support for alt-php72 will be added in the next release.

Changelog:

alt-php44-4.4.9-70

alt-php51-5.1.6-80

alt-php52-5.2.17-106

  • ALTPHP-380: added user.ini lookup in home dir;
  • ALTPHP-380: changed CRIU messaging;
  • ALTPHP-357: changed permissions for /opt/alt/phpXY/var/lib/php/session to 5733;
  • ALTPHP-357: session.save_path set to "/opt/alt/phpXY/var/lib/php/sessions";
  • ALTPHP-358: added symlinks for MultiPHP support (cPanel >= 11.66.0.11);
  • ALTPHP-381: built against latest version of tidy.

alt-php53-5.3.29-58

  • alt-phpXY-intl requires alt-libicu >= 57.1;
  • ALTPHP-380: added user.ini lookup in home dir;
  • ALTPHP-380: changed CRIU messaging;
  • ALTPHP-357: changed permissions for /opt/alt/phpXY/var/lib/php/session to 5733;
  • ALTPHP-357: session.save_path set to "/opt/alt/phpXY/var/lib/php/sessions";
  • ALTPHP-358: added symlinks for MultiPHP support (cPanel >= 11.66.0.11);
  • ALTPHP-381: built against latest version of tidy;
  • ALTPHP-365: added FPM support;
  • AAP-176: added correct exit from lve if the nproc limit is reached for php-fpm;
  • changed lve log levels for php-fpm.

alt-php54-5.4.45-41

alt-php55-5.5.38-23

alt-php56-5.6.31-6

alt-php70-7.0.23-4

alt-php71-7.1.9-4

  • alt-phpXY-intl requires alt-libicu >= 57.1;
  • ALTPHP-380: added user.ini lookup in home dir;
  • ALTPHP-380: changed CRIU messaging;
  • ALTPHP-357: changed permissions for /opt/alt/phpXY/var/lib/php/session to 5733;
  • ALTPHP-357: session.save_path set to "/opt/alt/phpXY/var/lib/php/sessions";
  • ALTPHP-358: added symlinks for MultiPHP support (cPanel >= 11.66.0.11);
  • ALTPHP-381: built against latest version of tidy;
  • AAP-176: added correct exit from lve if the nproc limit is reached for php-fpm;
  • changed lve log levels for php-fpm.

alt-php72-7.2.0-0.rc.2.2

  • initial release;
  • alt-phpXY-intl requires alt-libicu >= 57.1;
  • ALTPHP-380: added user.ini lookup in home dir;
  • ALTPHP-380: changed CRIU messaging;
  • ALTPHP-357: set permissions for /opt/alt/phpXY/var/lib/php/session to 5733;
  • ALTPHP-357: session.save_path set to "/opt/alt/phpXY/var/lib/php/sessions";
  • ALTPHP-358: added symlinks for MultiPHP support (cPanel >= 11.66.0.11);
  • ALTPHP-381: built against latest version of tidy;
  • AAP-176: added correct exit from lve if the nproc limit is reached for php-fpm;
  • changed lve log levels for php-fpm.

alt-php-config-1-21

  • ALTPHP-358: added script for MultiPHP symlinks reconfiguration.

To install please run the following command:

yum groupinstall alt-php --enablerepo=cloudlinux-updates-testing
Categories: Technology

#FuerzaMexico: A way to help Mexico Earthquake victims

CloudFlare - Wed, 20/09/2017 - 14:17
#FuerzaMexico: A way to help Mexico Earthquake victims

#FuerzaMexico: A way to help Mexico Earthquake victims Photo Credit: United Nations Photo (Flickr)

On September 19, 1985, Mexico City was hit with the most damaging earthquake in its history. Yesterday, exactly 32 years later, Mexico’s capital and neighbouring areas were hit again by a large earthquake that caused significant damage. While the scale of the destruction is still being assessed, many people have lost their lives and the lives of many more have been disrupted. Today, many heroes are on the streets focusing on recovery and relief.

We at Cloudflare want to make it easy for people to help out those affected in central Mexico. The Mexico Earthquake app will allow visitors to your site to donate to one of the charities helping those impacted.

#FuerzaMexico: A way to help Mexico Earthquake victims

The Mexico Earthquake App takes two clicks to install and requires no code change. The charities listed are two well respected organizations that are on the ground helping people now.

Install Now

If you wanted to add your own custom list of charities for disaster relief or other causes, feel free to fork the source of this app and make your own.

#FuerzaMéxico: A way to support those affected by the SismoMX

On September 19, 1985, Mexico City was hit by one of the worst earthquakes in its history. Yesterday, exactly 32 years later, CDMX and surrounding areas were struck by another strong earthquake. Although the full scale of the destruction is not yet known, a great many people have been affected. Thousands of Mexican heroes are focusing on search, rescue, and rebuilding.

At Cloudflare we want to do our part and make sure that donations for those affected can arrive easily. Our Mexico Earthquake app will allow visitors to your website to donate to civil organizations that support the victims.

Install Now

If you want to add other organizations and/or charities, you can modify the source code available here.

Categories: Technology

EasyApache 4 updated

CloudLinux - Wed, 20/09/2017 - 13:31

The new updated packages are now available for download from our production repository.

Changelog:

ea-apache24 2.4.27-7.cloudlinux

  • CVE-2017-9798 (“Optionsbleed”) fixed (see the verification sketch after this list);
  • EA-6096: added note to mod_unique_id summary about performance degradation.
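CVE-2017-9798, known as “Optionsbleed”, can leak fragments of server memory into the Allow header of HTTP OPTIONS responses. As a rough post-update spot check, assuming example.com stands in for a host on the updated server, repeat OPTIONS requests and watch for unstable or garbled Allow headers:

# Repeat OPTIONS requests; a patched server returns one stable, sane Allow header
for i in $(seq 1 50); do
  curl -sI -X OPTIONS http://example.com/ | grep -i '^Allow:'
done | sort | uniq -c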

ea-apache24-config 1.0-113.cloudlinux

  • EA-6240: whitelist proxy subdomains in ModSec for SSL vhosts;
  • EA-6778: added warnings to ssl_vhost.default about sslcertificatekeyfile.

To update run the command:

yum update ea-apache24 ea-apache24-config
Categories: Technology

Beta: Alt-Ruby updated

CloudLinux - Wed, 20/09/2017 - 12:22

The new updated Alt-Ruby packages are available for download from our updates-testing repository.

Changelog:

alt-ruby22-2.2.8-24

alt-ruby23-2.3.5-16

alt-ruby24-2.4.2-4

  • CVE-2017-0898: buffer underrun vulnerability in Kernel.sprintf;
  • CVE-2017-10784: escape sequence injection vulnerability in the Basic authentication of WEBrick;
  • CVE-2017-14033: buffer underrun vulnerability in OpenSSL ASN1 decode;
  • CVE-2017-14064: heap exposure vulnerability in generating JSON;
  • fixed multiple vulnerabilities in RubyGems;
  • updated bundled libyaml to version 0.1.7.
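Before pulling these fixes from the testing repository, you can check which Alt-Ruby packages and versions are currently installed (a minimal sketch using the package manager):

# List installed Alt-Ruby packages with their versions
rpm -qa 'alt-ruby*' | sort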

Install command:

yum groupinstall alt-ruby --enablerepo=cloudlinux-updates-testing
Categories: Technology

LVE Manager and CageFS updated

CloudLinux - Wed, 20/09/2017 - 07:57

The new updated LVE Manager and CageFS packages are available for download from our production repository.

Changelog:

cagefs-6.0-56

  • CAG-753: cagefsctl --list no longer creates directories with 0777 permissions;
  • CAG-751: fixed errors with CageFS on the latest cPanel EDGE;
  • CAG-749: resolved conflict with CageFS and systemd;
  • CAG-746: made the cPanel Autodiscover script workable for CageFS-enabled users;
  • CAG-747: allowed whitespace lines in /etc/cagefs/proxy.commands (both in cagefs itself and --sanity-check command);
  • CAG-734: cleaned PHP sessions for Plesk based on session.gc_maxlifetime;
  • CAG-738: different session.save_path for each alt-php version;
  • CAG-739: made CageFS WHM plugin invisible for resellers in cPanel;
  • WEB-524: fixed redirect from SSL to non-SSL when using CageFS update in the DirectAdmin UI;
  • CAG-737: execute cagefsctl --remove-all when uninstalling the cagefs package.

lvemanager-2.0-33

To update run:

yum update cagefs lvemanager
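After the update completes, you can run the CageFS consistency check referenced in the changelog above (the --sanity-check command mentioned under CAG-747):

# Verify CageFS configuration consistency after updating
cagefsctl --sanity-check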
Categories: Technology
