
Answer by Frank Heikens for PostgreSQL procedural languages overhead (plpython / plsql / pllua...)

Is the context a big overhead? Can I use it for realtime data mapping (let's say 1000 queries/s)?

Performance depends on the hardware and on the complexity of your functions. I built an appliance that ran on a small 12-core server with a FusionIO card (total cost about €10,000) and handled about 2,500 transactions per second with 20 concurrent users. Each transaction calls 29 stored procedures to process the data and return some useful information to the client. Some functions execute just one query, others several. In total, the system executes about 200,000 INSERT, SELECT and UPDATE statements per second.
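
To give an idea of what such a function looks like, here is a minimal PL/pgSQL sketch of the kind of stored procedure a transaction like that might call: one INSERT, one UPDATE and one SELECT per invocation. The table and function names (measurements, sensor_totals, record_reading) are invented for illustration and are not taken from the appliance.

```sql
-- Hypothetical example only: one INSERT, one UPDATE and one SELECT per call.
CREATE TABLE IF NOT EXISTS measurements (
    sensor_id   int,
    reading     numeric,
    created_at  timestamptz DEFAULT now()
);

CREATE TABLE IF NOT EXISTS sensor_totals (
    sensor_id   int PRIMARY KEY,
    reading_sum numeric NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION record_reading(p_sensor_id int, p_reading numeric)
RETURNS numeric
LANGUAGE plpgsql AS $$
DECLARE
    v_total numeric;
BEGIN
    -- INSERT the raw data
    INSERT INTO measurements (sensor_id, reading)
    VALUES (p_sensor_id, p_reading);

    -- UPDATE a running aggregate (a production version would rather use
    -- INSERT ... ON CONFLICT to stay safe under concurrency)
    UPDATE sensor_totals
       SET reading_sum = reading_sum + p_reading
     WHERE sensor_id = p_sensor_id;
    IF NOT FOUND THEN
        INSERT INTO sensor_totals (sensor_id, reading_sum)
        VALUES (p_sensor_id, p_reading);
    END IF;

    -- SELECT something useful to return to the client
    SELECT reading_sum INTO v_total
      FROM sensor_totals
     WHERE sensor_id = p_sensor_id;

    RETURN v_total;
END;
$$;

-- Called from the client like any other query:
-- SELECT record_reading(42, 17.5);
```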

This is all written in SQL, PL/pgSQL and PL/PerlU, and I'm pretty sure the system could run even faster if (some) functions were rewritten in C.
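
For reference on the language mix (again, generic examples rather than the appliance's code), the same trivial function can be written in plain SQL and in PL/PerlU. The function name add_numbers is made up; PL/PerlU needs the plperlu extension, which a superuser must install.

```sql
-- A simple SQL-language function: the planner can often inline these,
-- so they add very little overhead.
CREATE OR REPLACE FUNCTION add_numbers(a int, b int)
RETURNS int
LANGUAGE sql AS $$
    SELECT a + b;
$$;

-- The same thing in PL/PerlU (requires: CREATE EXTENSION plperlu;).
CREATE OR REPLACE FUNCTION add_numbers_perl(int, int)
RETURNS int
LANGUAGE plperlu AS $$
    my ($x, $y) = @_;
    return $x + $y;
$$;

-- SELECT add_numbers(1, 2);       -- 3
-- SELECT add_numbers_perl(1, 2);  -- 3
```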

In this appliance, most of the performance comes from the SSD card. On a single rotating disk we would never get anywhere near this throughput. Cheap SSD drives also fail: everything runs fine for an hour (because of the RAID card's cache) and then it's game over. The FusionIO card is expensive, but a very good investment when you're I/O bound.

