Daniel Nicoletti

São Paulo, Brazil - http://dantti.wordpress.com

Cutelyst benchmarks on TechEmpower round 14

The new TechEmpower benchmark results are finally out. Round 14 took almost six months to be ready, and while more frequent testing would be desirable, the long cycle gave us plenty of time to improve things.

Last round a bug was crashing our threaded benchmarks (which wasted time restarting). Besides getting that bug fixed, the most notable changes that improved our performance were:

  • Use of a custom epoll event dispatcher instead of the default glib one (the tests without "epoll" in their name use the default glib dispatcher); a minimal sketch of the epoll pattern follows this list
  • jemalloc, which brought the coolest boost: you just need to link to it (or use LD_PRELOAD) and memory allocation magically gets faster (around 20% on these tests)
  • CPU affinity, which helps the scheduler keep each thread/process pinned to a CPU core
  • SO_REUSEPORT, which on Linux helps get incoming connections evenly distributed among the threads/processes
  • Several code optimizations found thanks to perf
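For the curious, here is roughly the pattern behind such a dispatcher. This is a generic, minimal sketch of an epoll loop, not Cutelyst's actual dispatcher (which implements Qt's QAbstractEventDispatcher interface); the names epollLoop and watch are made up for illustration:

#include <sys/epoll.h>

// Register a descriptor once; epoll keeps the interest list in the
// kernel instead of rebuilding a poll set on every loop iteration.
void watch(int epfd, int fd)
{
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
}

// One blocking wait multiplexes every watched descriptor.
void epollLoop(int epfd)
{
    epoll_event events[64];
    for (;;) {
        const int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; ++i) {
            // wake whatever QSocketNotifier was registered for
            // events[i].data.fd
        }
    }
}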

I'm very glad with our results: Cutelyst managed to stay among the top performers, which really pays off all the hard work. It also provides us with good numbers to show people interested in the framework.

Check it out: https://www.techempower.com/benchmarks/

Cutelyst 1.6.0 released, to infinity and beyond!

Once 1.5.0 was released I thought the next release would be a small one. It started with a bunch of bug fixes, and Simon Wilper made a contribution to Utils::Sql. Basically, once things get out to production you find bugs, so there were tons of fixes to the WSGI module.

Then the first preview of TechEmpower benchmarks round 14 came out, and Cutelyst's performance was great, so I was planning to release 1.6.0 as it was. But the second preview fixed a bug that had been scaling Cutelyst's results up, and our performance was suddenly worse than in round 13. That didn't make sense, since we now had jemalloc and a few other improvements.

Actually, the results on the 40+HT core server were close to the ones I got locally with a single thread.

Looking at the machine state it was clear that only a few (9) workers were running at the same time. I then decided to create an experimental connection balancer for threads: the main thread accepts incoming connections and passes them evenly to the worker threads, which of course puts a new bottleneck on the main thread. Once the code was ready (work that ended up improving other parts of the WSGI module) I became aware of SO_REUSEPORT.
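The idea looks roughly like this; a minimal sketch with assumed names (acceptLoop, contexts), not the real Cutelyst-WSGI code, and the functor overload of QMetaObject::invokeMethod it uses needs Qt >= 5.10:

#include <QMetaObject>
#include <QObject>
#include <QVector>
#include <sys/socket.h>

// The main thread accepts and hands each descriptor to a worker in
// round-robin order. 'contexts' holds one QObject living in each
// worker thread, so the queued call runs the lambda in that thread.
void acceptLoop(int listenFd, const QVector<QObject *> &contexts)
{
    int next = 0;
    for (;;) {
        const int fd = ::accept(listenFd, nullptr, nullptr);
        if (fd < 0)
            continue;
        QObject *ctx = contexts[next];
        next = (next + 1) % contexts.size();
        // Qt::QueuedConnection posts the call to ctx's thread
        QMetaObject::invokeMethod(ctx, [fd]() {
            // create the connection handler for fd in the worker thread
        }, Qt::QueuedConnection);
    }
}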

The SO_REUSEPORT socket option is available on Linux >= 3.9 and, unlike the BSD version, it implements a simple load balancer in the kernel. This made my thread balancer obsolete, although the balancer is still useful on non-Linux systems. The option is also nicer since it works for processes as well.
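Enabling it is a single setsockopt() call before bind(). A plain POSIX sketch (makeListenSocket is a made-up name): each worker creates its own listening socket on the same port and the kernel spreads incoming connections among them:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <cstdint>

int makeListenSocket(uint16_t port)
{
    const int fd = ::socket(AF_INET, SOCK_STREAM, 0);
    const int one = 1;
    // must be set on every worker's socket *before* bind()
    ::setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(port);
    ::bind(fd, reinterpret_cast<sockaddr *>(&addr), sizeof(addr));
    ::listen(fd, SOMAXCONN);
    return fd;
}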

With 80 cores there's still the chance that the OS scheduler puts most of your threads on the same cores, and maybe even moves them around under load. So an option for setting CPU affinity was also added; it allows each worker to be pinned to one or more cores evenly, using the same logic as uWSGI.
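On Linux the pinning itself boils down to something like this (a sketch using the GNU affinity API; pinToCore is a made-up helper):

#include <pthread.h>
#include <sched.h>

// Pin the calling thread to a single CPU core. pthread_setaffinity_np
// is a GNU extension (g++ defines _GNU_SOURCE by default).
bool pinToCore(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set) == 0;
}

// e.g. worker i on a machine with C cores: pinToCore(i % C);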

Now that the WSGI module supported all these features, preview 3 of the benchmarks came out and the results were still terrible... Further investigation revealed that a variable that was supposed to hold the CPU core count was set to 8 instead of 80. I'm sure all this work did improve performance for servers with lots of cores, so in the end the wrong interpretation was good after all :)

Preview 4 came out and we are back at the top. I'll do another post once it's final.

The code name "to infinity and beyond" came to mind due to the scalability options this release got :D

Last but not least, I did my best to get rid of Doxygen's missing-documentation warnings.

Have fun: https://github.com/cutelyst/cutelyst/archive/v1.6.0.tar.gz

Cutelyst 1.5.0 released, I18N and HTTPS built-in

Cutelyst, the C++/Qt web framework, just got a new stable release.

Right after the last release, Matthias Fehring made another important contribution, adding support for internationalization to Cutelyst: you can now have your Grantlee templates properly translated depending on the user's settings.

Then on IRC a user asked if Cutelyst-WSGI had HTTPS support, which it didn't. You could already enable HTTPS (as I do) by putting NGINX in front of your application or by using uWSGI, but of course having it built into Cutelyst-WSGI is a lot more convenient, especially since his use case was embedded devices.

Cutelyst-WSGI also got support for --umask, --pidfile, --pidfile2 and --stop (which sends a signal to stop the instance based on the pidfile provided), as well as better documentation. There were fixes for respawning and cheaping workers, and since I now run it in production for all my web applications, FastCGI got some critical fixes.

The cutelyst command was updated to use the WSGI library, so it works on Windows and OS X without requiring uWSGI, making development easier.

www.cutelyst.org got the Cutelyst logo and an updated CMlyst version, though the site still looks ugly...

Download here: https://github.com/cutelyst/cutelyst/archive/v1.5.0.tar.gz

Have fun!

Cutelyst 1.4.0 released, C100K ready

Yes, it's not a typo.

Thanks to the last batch of improvements, and with the great help of jemalloc, cutelyst-wsgi can do 100k requests per second using a single thread/process on my i5 CPU. Without jemalloc the rate was around 85k req/s.

This, together with the epoll event loop, can really scale your web application. Initially I thought the option to replace Qt's default glib event loop (on Unix) brought no gain, but after increasing the number of connections it handled them a lot better. With 256 connections, the glib event loop gets to 65k req/s while the epoll one stays at 90k req/s, a lot closer to the numbers when only 32 connections are tested.

Besides these lovely numbers, Matthias Fehring added a new memcached session backend and a change to finally get translations working in Grantlee templates. cutelyst-wsgi got --socket-timeout, --lazy, many fixes, removal of deprecated Qt API usage, and Unix signal handling seems to be working properly now.

Get it! https://github.com/cutelyst/cutelyst/archive/r1.4.0.tar.gz

Hang out in the FreeNode #cutelyst IRC channel or in the Google group: https://groups.google.com/forum/#!forum/cutelyst

Have fun!

Cutelyst 1.3.0 released

Only 21 days after the last stable release, and some huge progress has been made.

The first big addition is a contribution by Matthias Fehring: a validator module that lets you validate user input quickly and easily. A multitude of user input types is available, such as email, IP address, JSON, date and many more, with a syntax that can be used across multiple threads while avoiding recreating the parsing rules:

// the static keeps the validation rules, so they are built only once
static Validator v({ new ValidatorRequired(QStringLiteral("username")) });
// FillStashOnError puts the validation error messages into the stash
if (v.validate(c, Validator::FillStashOnError)) { ... }

Then I wanted to replace uWSGI on my server with cutelyst-wsgi, and although performance benchmarks show that NGINX still talks faster to cutelyst-wsgi using proxy_pass (HTTP), I wanted to have FastCGI or uwsgi protocol support.

Evaluating FastCGI vs uwsgi was fairly easy: FastCGI is widely supported, and due to a bad design decision the uwsgi protocol has no concept of keeping connections alive. So the client talks to NGINX with keep-alive, but NGINX keeps closing the connection when talking to your app, and this makes a huge difference, even if you are using UNIX domain sockets.

uWSGI has served us well, but performance- and flexibility-wise it's not the right choice anymore. In async mode uWSGI has a fixed number of workers, which makes forking take longer and uses a lot more RAM, and it doesn't support keep-alive on any protocol. The 2.1 release (which nobody knows when will be released) will support keep-alive for HTTP, but I still fail to see how that would scale with fixed resources.

Here are some numbers when benchmarking with a single worker on my laptop:

  • uWSGI: 30k req/s (FastCGI protocol, which doesn't support keep conn)
  • uWSGI: 32k req/s (uwsgi protocol, which also doesn't support keeping connections)
  • cutelyst-wsgi: 24k req/s (FastCGI, keep_conn off)
  • cutelyst-wsgi: 40k req/s (FastCGI, keep_conn on)
  • cutelyst-wsgi: 42k req/s (HTTP proxy_pass with keep-alive on)

As you can see the uwsgi protocol is faster than FastCGI, so if you still need uWSGI use the uwsgi protocol; but there's a clear win in using cutelyst-wsgi.

UNIX sockets weren't supported in cutelyst-wsgi and are now supported with a HACK. Sadly QLocalServer doesn't expose the socket descriptor (one of a few things waiting for responses on their Qt bug reports; maybe I'll find time to write the patches and ask for review), so I inspect its children() until a QSocketNotifier is found and get the descriptor from there. It works great but might break in future Qt releases, I know; at least it won't crash.
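The hack looks roughly like this (an assumed shape, not the exact Cutelyst-WSGI code; socketDescriptor is a made-up helper):

#include <QLocalServer>
#include <QSocketNotifier>

// Dig the listening descriptor out of QLocalServer's internal
// QSocketNotifier, since there is no public accessor for it.
qintptr socketDescriptor(const QLocalServer *server)
{
    const auto notifiers = server->findChildren<QSocketNotifier *>();
    if (!notifiers.isEmpty())
        return notifiers.first()->socket();
    return -1;
}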

With UNIX sockets came command line options like --uid, --gid, --chown-socket and --socket-access, as well as systemd notify integration.

All of this made me review some code and realize a bad decision I had made: storing headers in lower case. The uwsgi and FastCGI protocols deliver them in upper-case form, so I was wasting time converting them, and if the request comes in via the HTTP protocol the header names are case-insensitive, so we have to normalize them anyway. This behavior is also used by frameworks like Django, and the change brought a good performance boost. It will only break your code if you use request headers in your Grantlee templates (which is uncommon, and we still have few users).

Normalizing headers in the Headers class was also causing QString to detach, giving us a performance penalty; it will still detach if you don't access/set the headers in the stored form (i.e. CONTENT_TYPE).
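The normalization itself is the usual HTTP-to-CGI mapping; a hypothetical sketch (normalizeHeaderKey is a made-up name, not the real Headers code):

#include <QString>

// Map an HTTP header name to the stored CGI-style form,
// e.g. "Content-Type" -> "CONTENT_TYPE".
QString normalizeHeaderKey(QString key)
{
    key = key.toUpper();                             // detaches if shared
    key.replace(QLatin1Char('-'), QLatin1Char('_'));
    return key;
}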

These changes made for a boost from 60k req/s to 80k req/s on my machine.

But we are not done: Matthias Fehring also found a security issue. I don't know when, but some change I made broke the code that returned an invalid user, which was used to check whether authentication was successful, allowing a valid username to authenticate even when the logs showed that the password didn't match. With his patch I added unit tests to make sure this never breaks again.

And to finish, today I wrote unit tests checking PBKDF2 against RFC 6070, and while there I noticed that the code could be faster. Before my changes all the tests took 44s; now they take 22s. Twice as fast matters, since this is CPU-bound code that needs to be fast to authenticate users without wasting your CPU cycles.
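If you want to check an implementation yourself, the first RFC 6070 test vector looks like this with OpenSSL's PBKDF2 (just an illustration, not Cutelyst's implementation):

#include <openssl/evp.h>
#include <cstdio>
#include <cstring>

int main()
{
    // RFC 6070 test 1: P = "password", S = "salt", c = 1, dkLen = 20
    unsigned char out[20];
    PKCS5_PBKDF2_HMAC("password", 8,
                      reinterpret_cast<const unsigned char *>("salt"), 4,
                      1, EVP_sha1(), sizeof(out), out);

    static const unsigned char expected[20] = {
        0x0c, 0x60, 0xc8, 0x0f, 0x96, 0x1f, 0x0e, 0x71, 0xf3, 0xa9,
        0xb5, 0x24, 0xaf, 0x60, 0x12, 0x06, 0x2f, 0xe0, 0x37, 0xa6 };
    std::puts(std::memcmp(out, expected, sizeof(out)) == 0 ? "ok" : "FAIL");
    return 0;
}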

Get it! https://github.com/cutelyst/cutelyst/archive/r1.3.0.tar.gz

Oh, and while the FreeNode #cutelyst IRC channel is still empty, I created a Cutelyst group on Google Groups: https://groups.google.com/forum/#!forum/cutelyst

Have fun!