16 April 2020

Recover Data from Corrupt Tables in PostgreSQL®

PostgreSQL® is an extremely robust database, and most of our clients entrust their data to it as well. When errors do occur, they usually originate in the storage system: individual bits or bytes flip, or entire blocks become corrupted. In this article we demonstrate how to recover data from corrupt tables.

In case of an error, the user is confronted with messages that originate from the storage layer or other PostgreSQL® subsystems:

postgres=# select * from t;
ERROR:  missing chunk number 0 for toast value 192436 in pg_toast_192430

postgres=# select * from a;
ERROR:  invalid memory alloc request size 18446744073709551613

If only individual tuples are corrupt, one can sometimes work around the problem by reading the tuples individually, e.g. by id, but this often does not get very far:

select * from t where id = 1;

It is more promising to address the tuples directly by their internal tuple ID, which PostgreSQL® calls the ctid:

select * from t where ctid = '(0,1)';
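
The first number in a ctid is the block (page) number, the second the item number within that block. On an intact table (or an intact part of one) the mapping is easy to see, for example with a query like this:

select ctid, * from t limit 5;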

To read out all tuples, we use a loop in plpgsql:

for page in 0 .. pages-1 loop
  for item in 1 .. ??? loop  -- number of items on this page is not yet known
     select * from t where ctid = '('||page||','||item||')' into r;
     return next r;
  end loop;
end loop;

We still need the number of pages in the table, which we get from pg_relation_size(), and the number of tuples on the page, for which we utilize the pageinspect extension.

select pg_relation_size(relname) / current_setting('block_size')::int into pages;

for page in 0 .. pages-1 loop
  for item in select t_ctid from heap_page_items(get_raw_page(relname::text, page)) loop
    SELECT * FROM t WHERE ctid=item into r;
    if r is not null then
      return next r;
    end if;
  end loop;
end loop;
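
To see what heap_page_items() provides, a single page can also be inspected by hand; this is only an illustration, and the exact output depends on the table (here page 0 of t):

select lp, lp_flags, t_ctid from heap_page_items(get_raw_page('t', 0));

Since pageinspect only decodes the raw page and never follows TOAST pointers, this usually works even when a plain SELECT on the same tuples fails.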

Now comes the most important part: accessing the damaged tuples or pages causes errors that we must catch with a begin..exception..end block. We pass the error messages on to the user as NOTICE messages. Furthermore, the function should not be hard-coded for a single table, but receive the table name as a parameter. The entire plpgsql function then looks like this:

create extension pageinspect;

create or replace function read_table(relname regclass)
returns setof record
as $$
declare
  pages int;
  page int;
  ctid tid;
  r record;
  sql_state text;
  error text;
begin
  select pg_relation_size(relname) / current_setting('block_size')::int into pages;

  for page in 0 .. pages-1 loop

    begin

      for ctid in select t_ctid from heap_page_items(get_raw_page(relname::text, page)) loop
        begin
          execute format('SELECT * FROM %s WHERE ctid=%L', relname, ctid) into r;
          if r is not null then
            return next r;
          end if;
        exception -- bad tuple
          when others then
            get stacked diagnostics sql_state := RETURNED_SQLSTATE;
            get stacked diagnostics error := MESSAGE_TEXT;
            raise notice 'Skipping ctid %: %: %', ctid, sql_state, error;
        end;
      end loop;

    exception -- bad page
      when others then
        get stacked diagnostics sql_state := RETURNED_SQLSTATE;
        get stacked diagnostics error := MESSAGE_TEXT;
        raise notice 'Skipping page %: %: %', page, sql_state, error;
    end;

  end loop;
end;
$$ language plpgsql;

Since the function returns setof record, the column definition list (the table's signature) must be provided in the call:

postgres=# select * from read_table('t') as t(t text);
NOTICE:  Skipping ctid (0,1): XX000: missing chunk number 0 for toast value 192436 in pg_toast_192430
       t
───────────────
 one
 two
 three
...
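
For a table with more than one column, the complete column list has to be spelled out. For a hypothetical table customers(id int, name text), the call would be:

select * from read_table('customers') as t(id int, name text);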

An alternative variant writes the read data directly into a new table:

postgres=# select rescue_table('t');
NOTICE:  t: page 0 of 1
NOTICE:  Skipping ctid (0,1): XX000: missing chunk number 0 for toast value 192436 in pg_toast_192430
                                    rescue_table
─────────────────────────────────────────────────────────────────────────────────────
 rescue_table t into t_rescue: 0 of 1 pages are bad, 1 bad tuples, 100 tuples copied
(1 row)

The table t_rescue was created automatically.
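
How many tuples made it into the copy can be checked directly (here the count should match the 100 copied tuples reported above):

select count(*) from t_rescue;

The complete function looks like this: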

create extension pageinspect;

create or replace function rescue_table(relname regclass, savename name default null, "create" boolean default true)
returns text
as $$
declare
  pages int;
  page int;
  ctid tid;
  row_count bigint;
  good_tuples bigint := 0;
  bad_pages bigint := 0;
  bad_tuples bigint := 0;
  sql_state text;
  error text;
begin
  if savename is null then
    savename := relname || '_rescue';
  end if;
  if rescue_table.create then
    execute format('CREATE TABLE %s (LIKE %s)', savename, relname);
  end if;

  select pg_relation_size(relname) / current_setting('block_size')::int into pages;

  for page in 0 .. pages-1 loop
    if page % 10000 = 0 then
      raise notice '%: page % of %', relname, page, pages;
    end if;

    begin

      for ctid in select t_ctid from heap_page_items(get_raw_page(relname::text, page)) loop
        begin
          execute format('INSERT INTO %s SELECT * FROM %s WHERE ctid=%L', savename, relname, ctid);
          get diagnostics row_count = ROW_COUNT;
          good_tuples := good_tuples + row_count;
        exception -- bad tuple
          when others then
            get stacked diagnostics sql_state := RETURNED_SQLSTATE;
            get stacked diagnostics error := MESSAGE_TEXT;
            raise notice 'Skipping ctid %: %: %', ctid, sql_state, error;
            bad_tuples := bad_tuples + 1;
        end;
      end loop;

    exception -- bad page
      when others then
        get stacked diagnostics sql_state := RETURNED_SQLSTATE;
        get stacked diagnostics error := MESSAGE_TEXT;
        raise notice 'Skipping page %: %: %', page, sql_state, error;
        bad_pages := bad_pages + 1;
    end;

  end loop;

  error := format('rescue_table %s into %s: %s of %s pages are bad, %s bad tuples, %s tuples copied',
    relname, savename, bad_pages, pages, bad_tuples, good_tuples);
  raise log '%', error;
  return error;
end;
$$ language plpgsql;
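
The optional parameters allow choosing a different target table name, or writing into a table that already exists. A sketch, with t_backup as a purely illustrative name:

create table t_backup (like t);
select rescue_table('t', 't_backup', false);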

The SQL scripts are also available in the pg_dirtyread git repository.

Support

Should you require assistance with data recovery or the general use of PostgreSQL®, our PostgreSQL® Competence Center is at your disposal – 24 hours a day, 365 days a year, if desired.

We look forward to hearing from you.


This article was originally written by Christoph Berg.

Categories: PostgreSQL®
Tags: Data recovery, PostgreSQL®, Security


About the author

credativ Redaktion


This account serves as a collection point for the valuable contributions of former credativ employees. We thank them for their great content, which has enriched the technical knowledge on our blog over the years. Their articles remain accessible to our readers here.


