Daniel Nicoletti

Brazil, São Paulo http://dantti.wordpress.com

Cutelyst 1.4.0 released, C100K ready.

Yes, it's not a typo.

Thanks to the last batch of improvements and with the great help of jemalloc, cutelyst-wsgi can do 100k requests per second using a single thread/process on my i5 CPU. Without jemalloc the rate was around 85k req/s.

This, together with the EPoll event loop, can really scale your web application. Initially I thought the option to replace Qt's default glib event loop (on Unix) brought no gain, but after increasing the number of connections it handles them a lot better: with 256 connections the glib event loop gets 65k req/s while the EPoll one stays at 90k req/s, much closer to the numbers measured with only 32 connections.

Besides these lovely numbers, Matthias Fehring added a new memcached session backend and a change to finally get translations working in Grantlee templates. cutelyst-wsgi got --socket-timeout, --lazy, many fixes, removal of deprecated Qt API usage, and Unix signal handling seems to work properly now.

Get it! https://github.com/cutelyst/cutelyst/archive/r1.4.0.tar.gz

Hang out on the #cutelyst IRC channel on FreeNode or on the Google group: https://groups.google.com/forum/#!forum/cutelyst

Have fun!

Cutelyst 1.3.0 released

Only 21 days after the last stable release, and some huge progress has been made.

The first big addition is a contribution by Matthias Fehring, which adds a validator module, allowing you to validate user input quickly and easily. A multitude of input validators is available, such as email, IP address, JSON, date and many more, with a syntax that can be used in multiple threads and avoids recreating the parsing rules:

static Validator v({ new ValidatorRequired(QStringLiteral("username")) });
if (v.validate(c, Validator::FillStashOnError)) { ... }

Then I wanted to replace uWSGI on my server with cutelyst-wsgi, and although benchmarks show that NGINX still talks faster to cutelyst-wsgi using proxy_pass (HTTP), I wanted to have FastCGI or uwsgi protocol support.

Evaluating FastCGI vs uwsgi was somewhat easy: FastCGI is widely supported, and due to a bad design decision the uwsgi protocol has no concept of keep-alive. So the client talks to NGINX with keep-alive, but NGINX keeps closing the connection when talking to your app, and this makes a huge difference, even if you are using UNIX domain sockets.

uWSGI has served us well, but performance- and flexibility-wise it's not the right choice anymore. In async mode uWSGI has a fixed number of workers, which makes forking take longer and uses a lot more RAM, and it doesn't support keep-alive on any protocol. The 2.1 release (which nobody knows when will be released) will support keep-alive for HTTP, but I still fail to see how that would scale with fixed resources.

Here are some numbers when benchmarking with a single worker on my laptop:

uWSGI 30k req/s (FastCGI protocol doesn't support keep conn)
uWSGI 32k req/s (uwsgi protocol that also doesn't support keeping connections)
cutelyst-wsgi 24k req/s (FastCGI keep_conn off)
cutelyst-wsgi 40k req/s (FastCGI keep_conn on)
cutelyst-wsgi 42k req/s (HTTP proxy_pass with keep conn on)

As you can see, the uwsgi protocol is faster than FastCGI, so if you still need uWSGI use the uwsgi protocol, but there's a clear win in using cutelyst-wsgi.

UNIX sockets weren't supported in cutelyst-wsgi and are now supported with a HACK. Sadly QLocalServer doesn't expose the socket descriptor (plus a few other things that are still waiting for responses on their bug reports, maybe I'll find time to write patches and ask for review), so I inspect its children() until a QSocketNotifier is found and get the descriptor from there. It works great, but I know it might break in future Qt releases; at least it won't crash.
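
Roughly, the trick looks like this (a minimal sketch of the idea, not the exact cutelyst-wsgi code; localServerDescriptor is just an illustrative name):

#include <QLocalServer>
#include <QSocketNotifier>

// Dig through QLocalServer's children until the QSocketNotifier watching the
// listening socket shows up, then take its descriptor from there.
static qintptr localServerDescriptor(QLocalServer *server)
{
    const QObjectList &children = server->children();
    for (QObject *child : children) {
        if (auto *notifier = qobject_cast<QSocketNotifier *>(child))
            return notifier->socket();
    }
    return -1;
}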

Along with UNIX sockets came command line options like --uid, --gid, --chown-socket and --socket-access, as well as systemd notify integration.

All of this made me review some code and realize a bad decision I had made, which was to store headers in lower case. Since the uWSGI and FastCGI protocols deliver them in upper case form I was wasting time converting them, and if the request comes via HTTP the header names are case insensitive, so we have to normalize them anyway. This behavior is also used by frameworks like Django, and the change brought a good performance boost; it will only break your code if you use request headers in your Grantlee templates (which is uncommon, and we still have few users). Normalizing header names in the Headers class was also causing QString to detach, giving us a performance penalty; it will still detach unless you access/set the headers in the stored form (i.e. CONTENT_TYPE).
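
To make the stored form concrete, here is a sketch of the normalization (a hypothetical helper, not the actual Headers implementation): field names become the CGI-style keys that uWSGI and FastCGI already deliver, so looking headers up with the stored key skips both the conversion and the detach.

#include <QString>

// Normalize a header field name to the stored form,
// e.g. "Content-Type" -> "CONTENT_TYPE".
static QString normalizeHeaderKey(const QString &field)
{
    QString key = field.toUpper();
    key.replace(QLatin1Char('-'), QLatin1Char('_'));
    return key;
}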

These changes made for a boost from 60k req/s to 80k req/s on my machine.

But we are not done: Matthias Fehring also found a security issue. I don't know when, but some change of mine broke the code that returned an invalid user, which was used to check whether authentication had succeeded, leading a valid username to authenticate even when the logs showed that the password didn't match. Along with his patch I added unit tests to make sure this never breaks again.

And to finish, today I wrote unit tests for PBKDF2 against the RFC 6070 test vectors, and while there I noticed the code could be faster: before my changes all the tests took 44s, now they take 22s. Twice as fast matters, since this is CPU-bound code that needs to be fast to authenticate users without wasting your CPU cycles.
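
For reference, this is the kind of thing those vectors check, sketched with plain Qt (this is not Cutelyst's implementation, just a straightforward PBKDF2-HMAC-SHA1 plus the first RFC 6070 test vector):

#include <QByteArray>
#include <QCryptographicHash>
#include <QMessageAuthenticationCode>
#include <QtEndian>

// Minimal PBKDF2-HMAC-SHA1 (RFC 2898), written for clarity rather than speed.
static QByteArray pbkdf2Sha1(const QByteArray &password, const QByteArray &salt,
                             int iterations, int dkLen)
{
    QByteArray key;
    quint32 blockIndex = 1;
    while (key.size() < dkLen) {
        uchar index[4];
        qToBigEndian(blockIndex, index);
        // U1 = HMAC(password, salt || INT_32_BE(blockIndex))
        QByteArray u = QMessageAuthenticationCode::hash(
                    salt + QByteArray(reinterpret_cast<const char *>(index), 4),
                    password, QCryptographicHash::Sha1);
        QByteArray t = u;
        for (int i = 1; i < iterations; ++i) {
            // Uj = HMAC(password, Uj-1), xored into the block
            u = QMessageAuthenticationCode::hash(u, password, QCryptographicHash::Sha1);
            char *td = t.data();
            const char *ud = u.constData();
            for (int j = 0; j < t.size(); ++j)
                td[j] ^= ud[j];
        }
        key.append(t);
        ++blockIndex;
    }
    return key.left(dkLen);
}

// First vector from RFC 6070: P="password", S="salt", c=1, dkLen=20
// pbkdf2Sha1("password", "salt", 1, 20) ==
//     QByteArray::fromHex("0c60c80f961f0e71f3a9b524af6012062fe037a6")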

Get it! https://github.com/cutelyst/cutelyst/archive/r1.3.0.tar.gz

Oh, and while the FreeNode #cutelyst IRC channel is still empty, I created a Cutelyst group on Google Groups: https://groups.google.com/forum/#!forum/cutelyst

Have fun!

Cutelyst 1.2.0 released

Cutelyst, the C++/Qt web framework, has a new release.

  • Test coverage got some additions to avoid breakage in the future.
  • Responses without content-length (chunked or close) are now handled properly.
  • The StatusMessage plugin got new, easier-to-use methods and also has its first deprecated API.
  • The Engine class now has a struct with the request data its subclasses should create; benchmarks showed this as a micro optimization, but I've heard functions with many arguments (as it was before) are bad on ARM, so I guess this is the first optimization for ARM :)
  • The Chained dispatcher finally got a performance improvement; I didn't benchmark it, but it should be much faster now.
  • Increased the usage of lambdas when the called function was small/simple; for some reason they reduce the library size, so I guess it's a good thing...
  • Sql helper classes can now use QThread::objectName(), which carries the worker thread id, so writing thread-safe code is easier (see the sketch after this list).
  • WSGI got static-map and static-map2 options; both work the same way as in uWSGI, allowing you to serve static files without needing uWSGI or a webserver.
  • WSGI got both auto-reload and touch-reload implemented, which help a lot during development.
  • Request::addressString() was added to make it easier to get the string representation of the client IP, also removing the IPv6 prefix if the IPv4 conversion succeeds.
  • Grantlee::View now exposes the Grantlee::Engine pointer so one can add filters to it (useful for i18n)
  • Context got a locale() method to help dealing with translations
  • Cutelyst::Core got ~25k smaller
  • Some other small bug fixes and optimizations....
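
A sketch of what the QThread::objectName() item above enables (illustrative only; the actual Cutelyst Sql helpers have their own API, and the driver and database name here are hypothetical): the worker thread's name becomes part of the QSqlDatabase connection name, so each thread gets its own connection without any locking.

#include <QSqlDatabase>
#include <QThread>

// Build a per-thread connection name from the worker thread's objectName().
static QSqlDatabase threadDatabase(const QString &name = QStringLiteral("app"))
{
    const QString connection = name + QLatin1Char('-')
                             + QThread::currentThread()->objectName();
    if (QSqlDatabase::contains(connection))
        return QSqlDatabase::database(connection);

    QSqlDatabase db = QSqlDatabase::addDatabase(QStringLiteral("QPSQL"), connection);
    db.setDatabaseName(QStringLiteral("mydb"));   // hypothetical settings
    db.open();
    return db;
}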

For 1.3.0 I hope the WSGI module can deal with FastCGI and/or uwsgi protocols, and to add helper methods to deal with i18n. But a CMlyst release might come first :)

Enjoy https://github.com/cutelyst/cutelyst/archive/r1.2.0.tar.gz

Cutelyst 1.1.2 released

Cutelyst, the C++/Qt web framework, just got a new release.

Yesterday I was going to do the 1.1.0 release: after running the tests and being happy with the current state I wrote this blog post, but before publishing I went to the www.cutelyst.org CMS (CMlyst) to paste the same post and got hit by a bug I had on 1.0.0, which at the time I thought was a bug in CMlyst. So I decided to investigate and found a few other bugs, with our WSGI not properly changing the current directory and not replacing the '+' sign with a ' ' space when parsing form data, which was the main bug. I did the fixes, tagged them as 1.1.1, and today found that automatically setting Content-Length wasn't working when the View rendered nothing.
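
For the record, the '+' bug is just the application/x-www-form-urlencoded rule that '+' encodes a space, something along these lines (an illustration, not the actual parser code; decodeFormValue is a made-up name):

#include <QByteArray>
#include <QString>
#include <QUrl>

// Decode a single form-urlencoded key or value: '+' means space, then percent-decode.
static QString decodeFormValue(QByteArray value)
{
    value.replace('+', ' ');
    return QUrl::fromPercentEncoding(value);
}

// decodeFormValue("S%C3%A3o+Paulo") == QStringLiteral("São Paulo")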

Cutelyst Core and Plugins API/ABI are stable, but Cutelyst-WSGI had to have changes: I had forgotten to expose the API needed for it to actually be useful outside Cutelyst, so this isn't a problem (nobody would have been able to use it anyway). API stability now extends to all components.

Thanks to a KDAB video about perf I finally managed to use it and was able to detect two slow code paths in Cutelyst-WSGI.

The first, which was causing 7% overhead, was due to QTcpSocket emitting the readyRead() signal and the HttpParser code calling sender() to get the socket. It turns out the sender() implementation is not as simple as I thought: there is a QMutexLocker, which may have caused some threads to wait on it (not really sure), so now there is one HttpParser per QTcpSocket. This uses a little more memory, but the code got faster. Hopefully in Qt 6 the signal can have a readyRead(QIODevice *) signature.
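
The shape of the fix is roughly this (a sketch only; HttpParser and feed() here stand in for the real cutelyst-wsgi classes): the socket and its parser are captured in a lambda, so the slot never has to call sender().

#include <QTcpServer>
#include <QTcpSocket>

// Stand-in for the real parser class.
struct HttpParser {
    void feed(const QByteArray &data) { Q_UNUSED(data) /* parse request bytes... */ }
};

static void setupServer(QTcpServer *server)
{
    QObject::connect(server, &QTcpServer::newConnection, server, [server]() {
        QTcpSocket *socket = server->nextPendingConnection();
        auto *parser = new HttpParser;   // one parser per connection, no sender() lookup
        QObject::connect(socket, &QTcpSocket::readyRead, socket, [socket, parser]() {
            parser->feed(socket->readAll());
        });
        QObject::connect(socket, &QTcpSocket::disconnected, socket, [socket, parser]() {
            delete parser;
            socket->deleteLater();
        });
    });
}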

The second was that I had ended up using QByteArrayMatcher to match \r\n when parsing request headers; perf showed it causing 1.2% overhead, and after some benchmarks I replaced it with code that was faster. Using a single thread on my Intel i5 I can process 85k req/s, so things did indeed get a little faster.
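
The replacement is essentially a plain scan for the two bytes, along these lines (sketched here; the actual parser code may differ):

// Find the next "\r\n" in a buffer, a simple alternative to QByteArrayMatcher.
static int findCrLf(const char *data, int len, int from = 0)
{
    for (int i = from; i + 1 < len; ++i) {
        if (data[i] == '\r' && data[i + 1] == '\n')
            return i;
    }
    return -1;
}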

It also got a contribution from a new developer on View::Email, which made me do a new simplemail-qt release (1.3.0); the code there still needs love regarding performance, but I haven't found time to improve it yet.

This new release has some new features:

  • An EPoll event loop was added to Linux builds; this is supposed to be faster than the default GLib one, but I couldn't measure the difference properly. It is optional and requires the CUTELYST_EVENT_LOOP_EPOLL=1 environment variable to be set.
  • ViewEmail got methods to set/get authentication method and connection type
  • WSGI: TCP_NODELAY, TCP KEEPALIVE, SO_SNDBUF and SO_RCVBUF can now be set on the command line; these use the same names as in uWSGI
  • Documentation was improved a bit and can now be generated with make docs, which no longer requires me to change the Cutelyst version manually

And also some important bug fixes:

  • Lazy evaluation of Request methods was broken on 1.0.0
  • WSGI: Fixed loading configs and parsing command line options

Download https://github.com/cutelyst/cutelyst/archive/r1.1.2.tar.gz and have fun!

Cutelyst benchmarks on TechEmpower round 13

Right on my first blog post about Cutelyst, users asked me about more "realistic" benchmarks and mentioned the TechEmpower benchmarks. Though it would have been a fairly easy task to build the tests, at that time it didn't seem worth the effort, as Cutelyst was a moving target.

Around 0.12 it became clear that the API was nearly frozen, and since everyone loves benchmarks they would serve as a "marketing" thing. Since the beginning of the project I've learned lots of optimizations which made the code faster and faster; there is still room for improvement, but changes now need to be carefully measured.

Cutelyst is the second web framework on the TechEmpower benchmarks that uses Qt, the other one being TreeFrog, and Qt's appearance was noticed on their blog:

https://www.techempower.com/blog/2016/11/16/framework-benchmarks-round-13/

The cutelyst-thread tests suffered from a segfault (the worker was restarted by the master process); the fix is in the 1.0.0 release, but it was too late for round 13, so in round 14 it will get even better results.

https://www.techempower.com/benchmarks/

If you want to help, the MongoDB/Grantlee/Clearsilver tests are still missing :D