How can Stackless Python be fast for concurrency?


Stackless Python does not make good use of multiple cores, so where does the speedup come from? Shouldn't multiprocessing be faster?

Most benchmarks compare Stackless Python tasklets against threads coordinating with locks and queues, which seems unfair, because locking always carries overhead.

A single-threaded program using plain function calls without any locks should be just as efficient as Stackless Python.
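To illustrate the point, here is a minimal sketch of lock-free cooperative scheduling on a single thread, using plain generators in the spirit of Stackless tasklets. The names `tasklet` and `run` are my own for illustration, not the Stackless API:

```python
from collections import deque

def tasklet(name, steps):
    # A cooperative "tasklet": it yields control back to the scheduler
    # instead of blocking on a lock.
    for i in range(steps):
        yield f"{name}:{i}"

def run(tasklets):
    # Round-robin scheduler on a single thread. No locks are needed
    # because exactly one tasklet runs at any moment.
    queue = deque(tasklets)
    trace = []
    while queue:
        t = queue.popleft()
        try:
            trace.append(next(t))
            queue.append(t)  # re-queue tasklets that are not finished
        except StopIteration:
            pass
    return trace

print(run([tasklet("a", 2), tasklet("b", 2)]))
```

Switching between tasklets here is just a function call into the generator, which is why this style can beat thread switching plus lock contention.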

Focus on functionality first, and performance second (unless you already know up front that you need to scale).

Most of the time on a server is spent on I/O, so multiple cores do not help that much. If it is mostly I/O you are dealing with, multi-threaded Python may be the simple answer.

If serving requests is CPU-intensive, then a master process (multi-threaded or not) with a pool of worker child processes is best.
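A minimal version of that master/worker layout using the standard library's `multiprocessing.Pool` (the function `cpu_heavy` is a made-up stand-in for a CPU-bound request handler):

```python
from multiprocessing import Pool

def cpu_heavy(n):
    # Stand-in for a CPU-bound request handler.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # The master process dispatches requests to worker child processes.
    # Each worker has its own interpreter, so the GIL is no bottleneck.
    with Pool(processes=2) as pool:
        print(pool.map(cpu_heavy, [1000, 2000, 3000]))
```

Each worker is a separate OS process, so CPU-bound requests genuinely run in parallel across cores.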

If you really want to scale, you could look at a different platform, such as Erlang. If you really want to scale and stay with Python, distributed Erlang on a cluster managing Python processes through ports could be worth a look.
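On the Python side of such an Erlang port, the wire format is simple: when Erlang opens the port with the `{packet, 2}` option, every message is prefixed with a 2-byte big-endian length. A hedged sketch of the framing helpers a Python port program would need (the names `pack_msg`/`unpack_msg` are my own):

```python
import struct

def pack_msg(payload: bytes) -> bytes:
    # Erlang's {packet, 2} port option: a 2-byte big-endian length
    # header, followed by the payload.
    return struct.pack(">H", len(payload)) + payload

def unpack_msg(data: bytes) -> bytes:
    # Inverse of pack_msg: read the length header, return the payload.
    (length,) = struct.unpack(">H", data[:2])
    return data[2:2 + length]

# In a real port program, a loop would read framed messages from
# sys.stdin.buffer, handle them, and write framed replies to
# sys.stdout.buffer.
```

Erlang then supervises and restarts these Python processes, which is where the scaling and fault-tolerance story comes from.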

There are many options, but until you have been operating at scale for a while, you can take the simple approach.

Release early, release often.

