{"id":8431,"date":"2024-12-17T10:00:50","date_gmt":"2024-12-17T09:00:50","guid":{"rendered":"https:\/\/www.credativ.de\/?p=8431"},"modified":"2024-12-16T12:47:42","modified_gmt":"2024-12-16T11:47:42","slug":"quick-benchmark-improvements-to-large-object-dumping-in-postgres-17","status":"publish","type":"post","link":"https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/","title":{"rendered":"Quick Benchmark: Improvements to Large Object Dumping in Postgres 17"},"content":{"rendered":"<p>Version 17 of PostgreSQL has been released for a while. One of the many features is a change by Tom Lane called <a href=\"https:\/\/git.postgresql.org\/gitweb\/?p=postgresql.git;a=commitdiff;h=a45c78e3284b269085e9a0cbd0ea3b236b7180fa\">\u201cRearrange pg_dump\u2019s handling of large objects for better efficiency\u201d<\/a>. In the past, we have seen our customers have several problems with a large number of large objects being a performance issue for dump\/restore. 
The main reason for this is that large objects are quite unlike TOAST (The Oversized Attribute Storage Technique): while TOASTed data is completely transparent to the user, large objects are stored out-of-line in the <code>pg_largeobject<\/code> catalog table, and the user table only stores the OID that references them.<\/p>\n<h2 id=\"introduction-to-large-objects\">Introduction To Large Objects<\/h2>\n<p>Here is an example of how large objects can be used:<\/p>\n<pre><code>postgres=# CREATE TABLE test(id BIGINT, blob OID);\r\nCREATE TABLE\r\npostgres=# INSERT INTO test VALUES (1, lo_import('\/etc\/issue.net'));\r\nINSERT 0 1\r\npostgres=# SELECT * FROM test;\r\n id | blob\r\n----+-------\r\n  1 | 33280\r\n(1 row)\r\n\r\npostgres=# SELECT * FROM pg_largeobject;\r\n loid  | pageno |                    data\r\n-------+--------+--------------------------------------------\r\n 33280 |      0 | \\x44656269616e20474e552f4c696e75782031320a\r\n(1 row)\r\n\r\npostgres=# SELECT lo_export(test.blob, '\/tmp\/foo') FROM test;\r\n lo_export\r\n-----------\r\n         1\r\n(1 row)\r\n\r\npostgres=# SELECT pg_read_file('\/tmp\/foo');\r\n    pg_read_file\r\n---------------------\r\n Debian GNU\/Linux 12+\r\n\r\n(1 row)\r\n\r\npostgres=# INSERT INTO test VALUES (1, lo_import('\/etc\/issue.net'));\r\nINSERT 0 1<\/code><\/pre>\n<p>The second <code>INSERT<\/code> creates another large object, so the database now contains two of them. If we dump the database in custom format with both versions 16 and 17 of <code>pg_dump<\/code> and then use <code>pg_restore -l<\/code> to display the table of contents (TOC), we see a difference:<\/p>\n<pre><code>$ for version in 16 17; do \/usr\/lib\/postgresql\/$version\/bin\/pg_dump -Fc -f lo_test_$version.dmp; \\\r\n&gt; pg_restore -l lo_test_$version.dmp | grep -v ^\\; &gt; lo_test_$version.toc; done\r\n$ diff -u lo_test_{16,17}.toc\r\n--- lo_test_16.toc  2024-12-11 09:05:46.550667808 +0100\r\n+++ lo_test_17.toc  2024-12-11 09:05:46.594670235 +0100\r\n@@ -1,5 +1,4 @@\r\n 215; 1259 33277 TABLE public test postgres\r\n-3348; 2613 
33280 BLOB - 33280 postgres\r\n-3349; 2613 33281 BLOB - 33281 postgres\r\n+3348; 2613 33280 BLOB METADATA - 33280..33281 postgres\r\n 3347; 0 33277 TABLE DATA public test postgres\r\n-3350; 0 0 BLOBS - BLOBS\r\n+3349; 0 0 BLOBS - 33280..33281 postgres<\/code><\/pre>\n<p>The version 17 dump combines the metadata of all large objects into a single <code>BLOB METADATA<\/code> entry in the TOC.<\/p>\n<p>Further, if we use the directory dump format, we see that <code>pg_dump<\/code> creates a file for each large object:<\/p>\n<pre><code>$ pg_dump -Fd -f lo_test.dir\r\n$ ls lo_test.dir\/\r\n3347.dat.gz  blob_33280.dat.gz  blob_33281.dat.gz  blobs.toc  toc.dat<\/code><\/pre>\n<p>If there are only a few large objects, this is not a problem. But if the large object mechanism is used to create hundreds of thousands or millions of large objects, this becomes a serious problem for <code>pg_dump\/pg_restore<\/code>.<\/p>\n<p>Finally, in order to fully remove the large objects, it does not suffice to drop the table; the large objects need to be unlinked as well:<\/p>\n<pre><code>postgres=# DROP TABLE test;\r\nDROP TABLE\r\npostgres=# SELECT COUNT(*) FROM pg_largeobject;\r\n count\r\n-------\r\n     2\r\n(1 row)\r\n\r\npostgres=# SELECT lo_unlink(loid) FROM pg_largeobject;\r\n lo_unlink\r\n-----------\r\n         1\r\n         1\r\n(2 rows)\r\n\r\npostgres=# SELECT COUNT(*) FROM pg_largeobject;\r\n count\r\n-------\r\n     0\r\n(1 row)<\/code><\/pre>\n<h2 id=\"benchmark\">Benchmark<\/h2>\n<p>We generate one million large objects in a PostgreSQL 16 instance:<\/p>\n<pre><code>lotest=# SELECT lo_create(id) FROM generate_series(1,1000000) AS id;\r\n lo_create\r\n-----------\r\n         1\r\n         2\r\n[...]\r\n    999999\r\n   1000000\r\n(1000000 rows)\r\n\r\nlotest=# SELECT COUNT(*) FROM pg_largeobject_metadata;\r\n  count\r\n---------\r\n 1000000\r\n(1 row)<\/code><\/pre>\n<p>We now dump the database with <code>pg_dump<\/code> from both versions 16 
and 17, first as a custom and then as a directory dump, using the <code>time<\/code> utility to track runtime and memory usage:<\/p>\n<pre><code>$ for version in 16 17; do echo -n \"$version: \"; \\\r\n&gt; \/usr\/bin\/time -f '%E %Mk mem' \/usr\/lib\/postgresql\/$version\/bin\/pg_dump \\\r\n&gt; -Fc -f lo_test_$version.dmp lotest; done\r\n16: 0:36.73 755692k mem\r\n17: 0:34.69 217776k mem\r\n$ for version in 16 17; do echo -n \"$version: \"; \\\r\n&gt; \/usr\/bin\/time -f '%E %Mk mem' \/usr\/lib\/postgresql\/$version\/bin\/pg_dump \\\r\n&gt; -Fd -f lo_test_$version.dir lotest; done\r\n16: 8:23.48 755624k mem\r\n17: 7:51.04 217980k mem<\/code><\/pre>\n<p>Dumping in the directory format takes much longer than in the custom format, while the amount of memory used is very similar for both formats. The runtime is slightly lower for version 17 compared to version 16, but the big difference is in memory usage, which is about 3.5x lower on version 17.<\/p>\n<p>Also, when looking at the size of the custom dump file or of the directory dump's table-of-contents (TOC) file, the difference becomes very clear:<\/p>\n<pre><code>$ ls -lh lo_test_1?.dmp | awk '{print $5 \" \" $9}'\r\n211M lo_test_16.dmp\r\n29M lo_test_17.dmp\r\n$ ls -lh lo_test_1?.dir\/toc.dat | awk '{print $5 \" \" $9}'\r\n185M lo_test_16.dir\/toc.dat\r\n6,9M lo_test_17.dir\/toc.dat<\/code><\/pre>\n<p>The custom dump is roughly 7x smaller, while the TOC file of the directory dump is around 25x smaller. We also tested with different numbers of large objects (from 50k to 1.5 million) and found only slight variance in those ratios: the memory-usage ratio increases from around 2x at 50k to 4x at 1.5 million, while the TOC size ratio goes down from around 30x at 50k to 25x at 1.5 million.<\/p>\n<h2 id=\"conclusion\">Conclusion<\/h2>\n<p>The changes to large object dumping in Postgres 17 are very welcome for users with a huge number of large objects. 
Memory requirements are much lower on PostgreSQL 17 compared to earlier versions, both for custom and directory format dumps.<\/p>\n<p>Unfortunately, neither the number of files in the directory nor the directory size changes much: each large object is still dumped into its own file, which can lead to problems if there are a lot of files:<\/p>\n<pre><code>$ for version in 16 17; do echo -n \"$version: \"; find lo_test_$version.dir\/ | wc -l; done\r\n16: 1000003\r\n17: 1001002\r\n$ du -s -h lo_test_??.dir\r\n4,1G    lo_test_16.dir\r\n3,9G    lo_test_17.dir<\/code><\/pre>\n<p>This might be an area for future improvements in Postgres 18 and beyond.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Version 17 of PostgreSQL has been released for a while. One of the many features is a change by Tom Lane called \u201cRearrange pg_dump\u2019s handling of large objects for better efficiency\u201d. In the past, we have seen our customers have several problems with a large number of large objects being a performance issue for dump\/restore. 
[&hellip;]<\/p>\n","protected":false},"author":37,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_improvement_type_select":"improve_an_existing","_thumb_yes_seoaic":false,"_frame_yes_seoaic":false,"seoaic_generate_description":"","seoaic_improve_instructions_prompt":"","seoaic_rollback_content_improvement":"","seoaic_idea_thumbnail_generator":"","thumbnail_generated":false,"thumbnail_generate_prompt":"","seoaic_article_description":"","seoaic_article_subtitles":[],"footnotes":""},"categories":[1708],"tags":[1887,2005],"class_list":["post-8431","post","type-post","status-publish","format-standard","hentry","category-postgresql-en","tag-planetpostgresql","tag-postgresql17"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.3) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Quick Benchmark: Improvements to Large Object Dumping in Postgres 17 - credativ\u00ae<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Quick Benchmark: Improvements to Large Object Dumping in Postgres 17\" \/>\n<meta property=\"og:description\" content=\"Version 17 of PostgreSQL has been released for a while. One of the many features is a change by Tom Lane called \u201cRearrange pg_dump\u2019s handling of large objects for better efficiency\u201d. In the past, we have seen our customers have several problems with a large number of large objects being a performance issue for dump\/restore. 
[&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/\" \/>\n<meta property=\"og:site_name\" content=\"credativ\u00ae\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/credativDE\/\" \/>\n<meta property=\"article:published_time\" content=\"2024-12-17T09:00:50+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.credativ.de\/wp-content\/uploads\/2019\/07\/Portfolio-Loesungen.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"800\" \/>\n\t<meta property=\"og:image:height\" content=\"550\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Michael Banck\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@credativde\" \/>\n<meta name=\"twitter:site\" content=\"@credativde\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Michael Banck\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/blog\\\/postgresql-en\\\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/blog\\\/postgresql-en\\\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\\\/\"},\"author\":{\"name\":\"Michael Banck\",\"@id\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/#\\\/schema\\\/person\\\/038c79105ce9b5fd885631da3f806698\"},\"headline\":\"Quick Benchmark: Improvements to Large Object Dumping in Postgres 17\",\"datePublished\":\"2024-12-17T09:00:50+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/blog\\\/postgresql-en\\\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\\\/\"},\"wordCount\":533,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/#organization\"},\"keywords\":[\"planetpostgresql\",\"postgresql17\"],\"articleSection\":[\"PostgreSQL\u00ae\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/www.credativ.de\\\/en\\\/blog\\\/postgresql-en\\\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\\\/#respond\"]}],\"copyrightYear\":\"2024\",\"copyrightHolder\":{\"@id\":\"https:\\\/\\\/www.credativ.de\\\/#organization\"}},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/blog\\\/postgresql-en\\\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\\\/\",\"url\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/blog\\\/postgresql-en\\\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\\\/\",\"name\":\"Quick Benchmark: Improvements to Large Object Dumping in Postgres 17 - 
credativ\u00ae\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/#website\"},\"datePublished\":\"2024-12-17T09:00:50+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/blog\\\/postgresql-en\\\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.credativ.de\\\/en\\\/blog\\\/postgresql-en\\\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/blog\\\/postgresql-en\\\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Quick Benchmark: Improvements to Large Object Dumping in Postgres 17\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/#website\",\"url\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/\",\"name\":\"credativ 
GmbH\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":[\"Organization\",\"Place\"],\"@id\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/#organization\",\"name\":\"credativ\u00ae\",\"url\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/\",\"logo\":{\"@id\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/blog\\\/postgresql-en\\\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\\\/#local-main-organization-logo\"},\"image\":{\"@id\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/blog\\\/postgresql-en\\\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\\\/#local-main-organization-logo\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/credativDE\\\/\",\"https:\\\/\\\/x.com\\\/credativde\",\"https:\\\/\\\/mastodon.social\\\/@credativde\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/credativ-gmbh\",\"https:\\\/\\\/www.instagram.com\\\/credativ\\\/\"],\"description\":\"Die credativ GmbH ist ein f\u00fchrendes, auf Open Source Software spezialisiertes IT-Dienstleistungs- und Beratungsunternehmen. Wir bieten umfassende und professionelle Services, von Beratung und Infrastruktur-Betrieb \u00fcber 24\\\/7 Support bis hin zu individuellen L\u00f6sungen und Schulungen. Unser Fokus liegt auf dem ganzheitlichen Management von gesch\u00e4ftskritischen Open-Source-Systemen, darunter Betriebssysteme (z.B. Linux), Datenbanken (z.B. PostgreSQL), Konfigurationsmanagement (z.B. Ansible, Puppet) und Virtualisierung. 
Als engagierter Teil der Open-Source-Community unterst\u00fctzen wir unsere Kunden dabei, die Vorteile freier Software sicher, stabil und effizient in ihrer IT-Umgebung zu nutzen.\",\"legalName\":\"credativ GmbH\",\"foundingDate\":\"2025-03-01\",\"duns\":\"316387060\",\"numberOfEmployees\":{\"@type\":\"QuantitativeValue\",\"minValue\":\"11\",\"maxValue\":\"50\"},\"address\":{\"@id\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/blog\\\/postgresql-en\\\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\\\/#local-main-place-address\"},\"geo\":{\"@type\":\"GeoCoordinates\",\"latitude\":\"51.1732374\",\"longitude\":\"6.392010099999999\"},\"telephone\":[\"+4921619174200\",\"08002733284\"],\"contactPoint\":{\"@type\":\"ContactPoint\",\"telephone\":\"08002733284\",\"email\":\"vertrieb@credativ.de\"},\"openingHoursSpecification\":[{\"@type\":\"OpeningHoursSpecification\",\"dayOfWeek\":[\"Monday\",\"Tuesday\",\"Wednesday\",\"Thursday\",\"Friday\"],\"opens\":\"09:00\",\"closes\":\"17:00\"},{\"@type\":\"OpeningHoursSpecification\",\"dayOfWeek\":[\"Saturday\",\"Sunday\"],\"opens\":\"00:00\",\"closes\":\"00:00\"}],\"email\":\"info@credativ.de\",\"areaServed\":\"D-A-CH\",\"vatID\":\"DE452151696\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/#\\\/schema\\\/person\\\/038c79105ce9b5fd885631da3f806698\",\"name\":\"Michael Banck\",\"description\":\"Michael Banck ist seit 2009 Mitarbeiter der credativ GmbH, sowie seit 2001 Mitglied des Debian Projekts und auch in weiteren Open Source Projekten aktiv. 
Als Mitglied des Datenbank-Teams von credativ hat er in den letzten Jahren verschiedene Kunden bei der L\u00f6sung von Problemen mit und dem t\u00e4glichen Betrieb von PostgreSQL\u00ae, sowie bei der Einf\u00fchrung von Hochverf\u00fcgbarkeits-L\u00f6sungen im Bereich Datenbanken unterst\u00fctzt und beraten.\"},{\"@type\":\"PostalAddress\",\"@id\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/blog\\\/postgresql-en\\\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\\\/#local-main-place-address\",\"streetAddress\":\"Hennes-Weisweiler-Allee 23\",\"addressLocality\":\"M\u00f6nchengladbach\",\"postalCode\":\"41179\",\"addressRegion\":\"Deutschland\",\"addressCountry\":\"DE\"},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.credativ.de\\\/en\\\/blog\\\/postgresql-en\\\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\\\/#local-main-organization-logo\",\"url\":\"https:\\\/\\\/www.credativ.de\\\/wp-content\\\/uploads\\\/2025\\\/04\\\/credativ-logo-right.svg\",\"contentUrl\":\"https:\\\/\\\/www.credativ.de\\\/wp-content\\\/uploads\\\/2025\\\/04\\\/credativ-logo-right.svg\",\"caption\":\"credativ\u00ae\"}]}<\/script>\n<meta name=\"geo.placename\" content=\"M\u00f6nchengladbach\" \/>\n<meta name=\"geo.position\" content=\"51.1732374;6.392010099999999\" \/>\n<meta name=\"geo.region\" content=\"Germany\" \/>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Quick Benchmark: Improvements to Large Object Dumping in Postgres 17 - credativ\u00ae","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/","og_locale":"en_US","og_type":"article","og_title":"Quick Benchmark: Improvements to Large Object Dumping in Postgres 17","og_description":"Version 17 of PostgreSQL has been released for a while. One of the many features is a change by Tom Lane called \u201cRearrange pg_dump\u2019s handling of large objects for better efficiency\u201d. In the past, we have seen our customers have several problems with a large number of large objects being a performance issue for dump\/restore. [&hellip;]","og_url":"https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/","og_site_name":"credativ\u00ae","article_publisher":"https:\/\/www.facebook.com\/credativDE\/","article_published_time":"2024-12-17T09:00:50+00:00","og_image":[{"width":800,"height":550,"url":"https:\/\/www.credativ.de\/wp-content\/uploads\/2019\/07\/Portfolio-Loesungen.jpg","type":"image\/jpeg"}],"author":"Michael Banck","twitter_card":"summary_large_image","twitter_creator":"@credativde","twitter_site":"@credativde","twitter_misc":{"Written by":"Michael Banck","Est. 
reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/#article","isPartOf":{"@id":"https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/"},"author":{"name":"Michael Banck","@id":"https:\/\/www.credativ.de\/en\/#\/schema\/person\/038c79105ce9b5fd885631da3f806698"},"headline":"Quick Benchmark: Improvements to Large Object Dumping in Postgres 17","datePublished":"2024-12-17T09:00:50+00:00","mainEntityOfPage":{"@id":"https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/"},"wordCount":533,"commentCount":0,"publisher":{"@id":"https:\/\/www.credativ.de\/en\/#organization"},"keywords":["planetpostgresql","postgresql17"],"articleSection":["PostgreSQL\u00ae"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/#respond"]}],"copyrightYear":"2024","copyrightHolder":{"@id":"https:\/\/www.credativ.de\/#organization"}},{"@type":"WebPage","@id":"https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/","url":"https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/","name":"Quick Benchmark: Improvements to Large Object Dumping in Postgres 17 - 
credativ\u00ae","isPartOf":{"@id":"https:\/\/www.credativ.de\/en\/#website"},"datePublished":"2024-12-17T09:00:50+00:00","breadcrumb":{"@id":"https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.credativ.de\/en\/"},{"@type":"ListItem","position":2,"name":"Quick Benchmark: Improvements to Large Object Dumping in Postgres 17"}]},{"@type":"WebSite","@id":"https:\/\/www.credativ.de\/en\/#website","url":"https:\/\/www.credativ.de\/en\/","name":"credativ GmbH","description":"","publisher":{"@id":"https:\/\/www.credativ.de\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.credativ.de\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":["Organization","Place"],"@id":"https:\/\/www.credativ.de\/en\/#organization","name":"credativ\u00ae","url":"https:\/\/www.credativ.de\/en\/","logo":{"@id":"https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/#local-main-organization-logo"},"image":{"@id":"https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/#local-main-organization-logo"},"sameAs":["https:\/\/www.facebook.com\/credativDE\/","https:\/\/x.com\/credativde","https:\/\/mastodon.social\/@credativde","https:\/\/www.linkedin.com\/co
mpany\/credativ-gmbh","https:\/\/www.instagram.com\/credativ\/"],"description":"Die credativ GmbH ist ein f\u00fchrendes, auf Open Source Software spezialisiertes IT-Dienstleistungs- und Beratungsunternehmen. Wir bieten umfassende und professionelle Services, von Beratung und Infrastruktur-Betrieb \u00fcber 24\/7 Support bis hin zu individuellen L\u00f6sungen und Schulungen. Unser Fokus liegt auf dem ganzheitlichen Management von gesch\u00e4ftskritischen Open-Source-Systemen, darunter Betriebssysteme (z.B. Linux), Datenbanken (z.B. PostgreSQL), Konfigurationsmanagement (z.B. Ansible, Puppet) und Virtualisierung. Als engagierter Teil der Open-Source-Community unterst\u00fctzen wir unsere Kunden dabei, die Vorteile freier Software sicher, stabil und effizient in ihrer IT-Umgebung zu nutzen.","legalName":"credativ GmbH","foundingDate":"2025-03-01","duns":"316387060","numberOfEmployees":{"@type":"QuantitativeValue","minValue":"11","maxValue":"50"},"address":{"@id":"https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/#local-main-place-address"},"geo":{"@type":"GeoCoordinates","latitude":"51.1732374","longitude":"6.392010099999999"},"telephone":["+4921619174200","08002733284"],"contactPoint":{"@type":"ContactPoint","telephone":"08002733284","email":"vertrieb@credativ.de"},"openingHoursSpecification":[{"@type":"OpeningHoursSpecification","dayOfWeek":["Monday","Tuesday","Wednesday","Thursday","Friday"],"opens":"09:00","closes":"17:00"},{"@type":"OpeningHoursSpecification","dayOfWeek":["Saturday","Sunday"],"opens":"00:00","closes":"00:00"}],"email":"info@credativ.de","areaServed":"D-A-CH","vatID":"DE452151696"},{"@type":"Person","@id":"https:\/\/www.credativ.de\/en\/#\/schema\/person\/038c79105ce9b5fd885631da3f806698","name":"Michael Banck","description":"Michael Banck ist seit 2009 Mitarbeiter der credativ GmbH, sowie seit 2001 Mitglied des Debian Projekts und auch in weiteren Open Source Projekten aktiv. 
Als Mitglied des Datenbank-Teams von credativ hat er in den letzten Jahren verschiedene Kunden bei der L\u00f6sung von Problemen mit und dem t\u00e4glichen Betrieb von PostgreSQL\u00ae, sowie bei der Einf\u00fchrung von Hochverf\u00fcgbarkeits-L\u00f6sungen im Bereich Datenbanken unterst\u00fctzt und beraten."},{"@type":"PostalAddress","@id":"https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/#local-main-place-address","streetAddress":"Hennes-Weisweiler-Allee 23","addressLocality":"M\u00f6nchengladbach","postalCode":"41179","addressRegion":"Deutschland","addressCountry":"DE"},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.credativ.de\/en\/blog\/postgresql-en\/quick-benchmark-improvements-to-large-object-dumping-in-postgres-17\/#local-main-organization-logo","url":"https:\/\/www.credativ.de\/wp-content\/uploads\/2025\/04\/credativ-logo-right.svg","contentUrl":"https:\/\/www.credativ.de\/wp-content\/uploads\/2025\/04\/credativ-logo-right.svg","caption":"credativ\u00ae"}]},"geo.placename":"M\u00f6nchengladbach","geo.position":{"lat":"51.1732374","long":"6.392010099999999"},"geo.region":"Germany"},"_links":{"self":[{"href":"https:\/\/www.credativ.de\/en\/wp-json\/wp\/v2\/posts\/8431","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.credativ.de\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.credativ.de\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.credativ.de\/en\/wp-json\/wp\/v2\/users\/37"}],"replies":[{"embeddable":true,"href":"https:\/\/www.credativ.de\/en\/wp-json\/wp\/v2\/comments?post=8431"}],"version-history":[{"count":2,"href":"https:\/\/www.credativ.de\/en\/wp-json\/wp\/v2\/posts\/8431\/revisions"}],"predecessor-version":[{"id":8435,"href":"https:\/\/www.credativ.de\/en\/wp-json\/wp\/v2\/posts\/8431\/revisions\/8435"}],"wp:attachment":[{"href":"https:\/\/www.credativ.de\/en\/wp-json\/wp\/v2\/medi
a?parent=8431"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.credativ.de\/en\/wp-json\/wp\/v2\/categories?post=8431"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.credativ.de\/en\/wp-json\/wp\/v2\/tags?post=8431"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}