In my day to day, I get to work with many customers migrating their data to Postgres. I work with customers migrating from homogenous sources (PostgreSQL), and also from heterogenous database sources such as Oracle and Redshift. Why do people pick Postgres? Because of the richness of PostgreSQL, and features like stored procedures, JSONB, PostGIS for geospatial workloads, and the many useful Postgres extensions, including my personal favorite: Citus.

A large chunk of the migrations that I help people with are homogenous Postgres-to-Postgres data migrations to the cloud. As Azure Database for PostgreSQL runs open source Postgres, in many cases the application migration can be drop-in and doesn't require a ton of effort. The majority of the effort usually goes into deciding on and implementing the right strategy for performing the data migration. For those of you who cannot afford any downtime during the Postgres migration process, there are of course data migration services that can help. But if you can afford some downtime for the migration during a specific maintenance window (e.g. during weekends, nights, etc.), then simple Postgres utilities such as pg_dump and pg_restore can be used.

In this post, let's walk through the tradeoffs to consider while using pg_dump and pg_restore for your Postgres database migrations, and how you can optimize your migrations for speed, too. Let's also explore scenarios in which you need to migrate very large Postgres tables. With large tables, using pg_dump and pg_restore to migrate your database might not be the most optimal approach. The good news is we'll walk through a nifty Python tool for migrating large database tables in Postgres. With this tool we observed the migration of a large Postgres table (~1.4 TB) complete in 7 hrs., vs. more than 1 day with pg_dump/pg_restore.

Faster migrations with pg_dump & pg_restore

pg_dump is a standard and traditional utility for backing up a PostgreSQL database. pg_dump takes a consistent snapshot of your Postgres database, even if the database is being actively used. pg_dump gives you multiple command-line options (I call them flags) that you can use to control the format and the content of the data you're backing up.

Some of the common and most useful command-line options for pg_dump enable you to do things like:

- Fine-grained control of dumping specific schemas, specific tables, just the data, and so on.
- Control over the format of the dump: options include plain text, or the custom and directory formats, which are compressed by default.
- Parallel dumps, using the --jobs/-j command-line option to specify the number of concurrent threads to use for the dump. Each thread dumps a specific table, so this option controls how many tables to dump simultaneously.
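To make these flags concrete, here is a minimal sketch of a schema-restricted, parallel dump in directory format. The host, database, schema, and path names are placeholders I've made up for illustration; the right combination of flags depends on your scenario.

```bash
# A minimal parallel dump of a single schema, in directory format.
# Directory format is compressed by default and is the only format that
# supports --jobs for parallel dumping (one worker per table at a time).
# Host, database, schema, and path names below are placeholders.
pg_dump \
  --host=source.example.com \
  --username=postgres \
  --dbname=mydb \
  --schema=app \
  --format=directory \
  --jobs=4 \
  --file=/path/to/dumpdir

# Variations: --table=<name> to dump specific tables, --data-only for just
# the data, or --format=custom for a single compressed archive file
# (custom format does not support parallel dumps).
```

One thing to keep in mind: because each worker dumps a specific table, parallel dumps help most when the database has many similarly sized tables; a single very large table still ends up on one worker, which is exactly the scenario the Python tool mentioned above is meant to address.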
You can use the pg_restore utility to restore a PostgreSQL database from an archive created by pg_dump. Similar to pg_dump, pg_restore also provides a lot of control over how you restore the archive. For example, you can restrict the restore to specific database objects/entities, specify parallel jobs for the restore, and so on.

TIP: Place the client machine on which you perform pg_dump/pg_restore as close as possible to the source and the target database, to avoid performance issues with bad network latency. If proximity to only one of the two is possible, you can choose either; just be sure to place the client machine as close as possible to the target database, the source database, or both.
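As a companion sketch, this is roughly how the restore side might look when using parallel jobs and restricting the restore to specific objects. Again, the host, database, table, and path names are placeholder values, not a prescribed setup.

```bash
# A minimal parallel restore of the directory-format archive from above.
# --jobs parallelizes the restore across tables and indexes. The target
# database "mydb" is assumed to already exist (or add --create).
# Host, database, and path names below are placeholders.
pg_restore \
  --host=target.example.com \
  --username=postgres \
  --dbname=mydb \
  --jobs=4 \
  /path/to/dumpdir

# To restrict the restore to specific objects, e.g. a single table:
pg_restore \
  --host=target.example.com \
  --username=postgres \
  --dbname=mydb \
  --table=events \
  /path/to/dumpdir
```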
In summary, pg_dump and pg_restore are the most commonly used, native, robust, and proven utilities for homogenous (Postgres to Postgres) database migrations. Using these utilities is the default way to perform data migrations when you can afford downtime (within some acceptable maintenance window). With the wealth of command-line options that pg_dump and pg_restore provide, it is important to use those options in an optimal way based on the scenario at hand.